AI in Employment Practice: Navigating Risks and Ensuring Compliance
Introduction
Artificial Intelligence (AI) is rapidly reshaping Nigerian workplaces. From recruitment and performance management to employee monitoring, AI-powered tools are being integrated into human resource (HR) practices to speed up processes and improve decision-making. However, the use of AI comes with legal and ethical challenges that current Nigerian employment laws are yet to fully address. This article explores the legal framework governing AI use, contemporary trends, and key risks, including algorithmic bias and privacy breaches. It also offers practical recommendations for curbing legal risks and safeguarding employee rights while adopting AI.
Legal Framework
There is presently no primary legislation governing AI use in Nigeria. This regulatory gap creates uncertainty for employers and employees alike. However, within the employment context, two sets of laws are largely applicable:
- The Nigerian Constitution, which guarantees a range of fundamental rights (for example, the right to freedom from discrimination). These rights apply to every aspect of the lives of Nigerian citizens and serve as a safeguard against the negative impacts of AI use.
- The Nigeria Data Protection Act, 2023 (NDPA) and its General Application and Implementation Directive, 2025 (GAID), which introduce comprehensive rules for processing personal data, particularly in the context of AI use. The rules cover compliance with data processing principles such as transparency, data minimisation, confidentiality, security and integrity of personal data, as well as the conduct of data protection impact assessments before AI use. The Nigeria Data Protection Commission (NDPC), which enforces the NDPA, is increasingly active in oversight and enforcement, signalling the importance of compliance.
Current Trends and the Role of AI in Human Resources
The use of AI is no longer theoretical; it is gaining momentum in Nigerian workplaces. Given the efficiency of AI systems, HR departments have begun adopting AI for:
- Resume screening: AI is used to filter prospective employees based on keywords and qualifications.
- Psychometric scoring: AI is used to evaluate personality traits and cognitive skills.
- Video interview analysis: AI is utilised to assess facial expressions and body language of prospective employees during virtual interview sessions.
- Performance analytics: AI tracks work patterns over time and predicts employee success or shortcomings in the workplace.
While the above uses deliver operational efficiency, they also introduce significant risks to workplace fairness and legal compliance. These risks are discussed below.
Key Legal Risks in AI-Driven Employment Processes
AI systems are trained on historical data, which may carry biases reflecting real-world problems such as gender inequality, racism and other forms of discrimination; the processing of that data also creates exposure to personal data breaches. These factors create significant risks when AI systems are deployed. Some key risks include:
- Discriminatory Outcomes
Having been trained on historical data, AI may discriminate against individuals on the basis of protected characteristics, which include ethnic group, place of origin, sex (gender), religion, circumstance of birth and political opinion. The Nigerian Constitution prohibits restrictions or deprivations based on such characteristics, particularly where other Nigerian citizens are not subjected to the same restrictions or deprivations.
Discriminatory outcomes flowing from AI use are well illustrated by Amazon’s AI hiring tool. In 2014, Amazon developed an AI hiring tool intended to automate and streamline resume screening so as to efficiently identify the best talent among thousands of applicants. However, the system exhibited gender bias because it was trained on historically male-dominated hiring data, and it ended up favouring male applicants over female applicants. This shows how AI can reinforce existing social and institutional inequalities through biases inherent in its training data.
- Opaque Decision-Making
Many AI algorithms operate as "black boxes". As such, they provide little transparency on how their decisions are reached. Within the context of employment, this limits accountability and makes it difficult for employers to demonstrate fairness in AI use or for employees to challenge decisions that impact them negatively.
- Privacy Breaches
AI tools often require extensive personal data processing, including behavioural and biometric information. Such processing without lawful basis or consent risks violating the NDPA and the constitutional right to privacy.
- Unfair Dismissal or Discrimination Claims
In view of the above and other risks, AI-driven decisions can lead to discrimination in applicant engagement, wrongful termination, or denial of promotions and other benefits. Employees and applicants alike can bring claims in court to challenge such decisions, and a failure to provide transparency and other safeguards increases employers’ vulnerability to those claims. Successful claims result in damages awards or out-of-court settlements, with attendant financial and reputational cost to the organisation.
Safeguards and Compliance Measures Employers Should Implement
The absence of specific AI legislation does not absolve employers of liability where AI is used without due care. Recourse must be had to existing legislation that applies to AI use in one way or another. It is also advisable for organisations to consider and adopt international standards and best practices on AI use. To effectively navigate legal risks and uphold employee rights, Nigerian employers using AI for employment purposes should consider the following practical measures:
- Conduct Algorithmic and Data Protection Impact Assessments (DPIAs): Algorithmic risk assessment involves evaluating the potential impact and risks of the algorithms within an AI system before its use, and ascertaining best practices to curb those risks. This helps employers understand how such systems reach decisions that impact employees. DPIAs, on the other hand, involve the assessment of personal data processing activities (in this case, AI use), with the aim of identifying and minimising processing likely to result in a high risk to the rights and freedoms of data subjects.
- Document the Lawful Basis for Data Processing and Obtain Consent Where Needed: Under the NDPA, personal data must be processed under a lawful basis such as consent, legitimate interest or contractual necessity, amongst others. Where the personal data of employees and candidates is processed with AI, the lawful basis relied upon must be identified and documented. Documentation is important to evidence compliance, especially to the NDPC and other regulatory agencies as appropriate.
- Vendor and Product Vetting: Before an AI vendor is engaged or their product deployed, both should be adequately vetted to confirm that the product is not prone to bias, security or compliance risks. Objective data and client testimonials should be considered in addition to the representations made by such vendors. This enables employers to understand the risks associated with a vendor and its systems before engagement or implementation.
- Ensure Explainability to Regulators and Employees: Conducting proper vendor and product due diligence as described above can enable employers to understand how the AI system proposed to be used reaches its decisions. This is not a surface understanding of how prompts or data entered into the system generate results, but an in-depth awareness of how the system’s algorithms work and how the various parts of the system come together to serve the user’s ends. This understanding aids explainability to regulators or to employees who question decisions made about them with the aid of AI.
- Periodic testing: AI systems should be periodically audited to detect and mitigate biases based on protected characteristics.
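As an illustration of what a periodic bias audit might look like in practice, the sketch below compares selection rates across applicant groups using the "four-fifths rule", a widely cited heuristic originating in US employment guidance rather than Nigerian law. The group labels and figures are hypothetical, and this is a simplified sketch, not a complete fairness methodology:

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, chosen = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag each group: True if its selection rate is at least
    `threshold` (80%) of the best-performing group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Hypothetical screening outcomes: 40% of male applicants shortlisted,
# 24% of female applicants shortlisted.
audit_log = ([("male", True)] * 40 + [("male", False)] * 60
             + [("female", True)] * 24 + [("female", False)] * 76)
print(four_fifths_check(audit_log))  # {'male': True, 'female': False}
```

A result of False for any group is a signal for further investigation of the model and its training data, not conclusive proof of unlawful discrimination.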
- Introduce Human Oversight: Employers should avoid fully automated decisions. Humans should review all AI recommendations, especially for hiring and termination decisions. This helps maintain accountability.
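A minimal sketch of how human oversight can be enforced in tooling: the AI's recommendation is recorded separately from the final decision, and no decision can exist without a named human reviewer. All names, fields and thresholds here are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Recommendation:
    candidate_id: str
    action: str        # e.g. "reject", "shortlist"
    model_score: float

@dataclass(frozen=True)
class Decision:
    recommendation: Recommendation
    reviewer: str      # the accountable human
    approved: bool     # False means the human overrode the AI
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def finalise(rec: Recommendation, reviewer: str, approved: bool) -> Decision:
    """A recommendation only becomes a decision once a named human
    reviewer is recorded, preserving an audit trail of who approved
    or overrode the AI output."""
    if not reviewer.strip():
        raise ValueError("a named human reviewer is required before any "
                         "AI recommendation takes effect")
    return Decision(rec, reviewer, approved)

# The AI suggests rejection; a human reviewer overrides it.
rec = Recommendation("cand-001", "reject", 0.31)
decision = finalise(rec, reviewer="A. Adeyemi", approved=False)
print(decision.reviewer, decision.approved)  # A. Adeyemi False
```

Keeping the recommendation and the decision as distinct records also supports the explainability and documentation measures discussed above.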
- AI Governance: Introduction of AI governance is important for effective management of AI-associated risks. Governance in this respect involves the establishment of structured processes, policies and oversight mechanisms to ensure that AI use aligns with the law, organisational values and ethical standards. Such policies and mechanisms ensure clear safeguards and responsibilities where AI use is concerned. This culminates in risk mitigation, fostering of employee rights and responsible innovation.
Conclusion
AI integration in the workplace brings efficiency. However, such use can easily harm an organisation’s bottom line and reputation if safeguards are not implemented. In a court of law or the court of public opinion, employers bear responsibility for the impact of AI systems on employees. As such, the benefits of such systems must be balanced against employers’ legal obligations to employees. Robust compliance with law and best-practice standards is not a nice-to-have; it is a must-have.
By carefully implementing safeguards and ensuring lawful compliance, Nigerian employers can position themselves as leaders in ethical AI implementation, not only in Nigeria but globally.
Should you require advice with respect to employment-related issues, please do not hesitate to contact us at [email protected]