Mandisi Dube

The Draft National AI Policy identifies several specific risks that make HR AI governance urgent, writes Mandisi Dube, client executive at 21st Century.

Fairness risk means models can reproduce past bias or create new unfair outcomes. Privacy risk arises because HR data is sensitive and must not be overused, misused, or exposed.

Transparency risk means managers and employees may not understand how a model reached a recommendation. Accountability risk arises when responsibility for a bad decision is blurred. Data quality risk arises when poor or inconsistent data produces misleading outputs.

Finally, governance risk emerges when tools are introduced faster than the organisation’s policies, controls and oversight structures can keep pace.

AI governance in HR must begin with a critical audit of the historical data that feeds it. Without this foundational step, even the most sophisticated analytics will deliver flawed outcomes and expose the organisation to POPIA liability, EEA non-compliance and misalignment with the national AI policy’s emphasis on fairness, non-discrimination and human-centred values.

Drawing on the Draft National AI Policy’s strategic pillar 3 (Responsible Governance) and strategic pillar 4 (Ethical and Inclusive AI), responsible intent becomes practical control through eight connected building blocks.

  1. Governance principles: Alongside the policy’s key principles – fairness, reliability, safety, privacy, security, inclusiveness, transparency and accountability – two further requirements, explainability and data quality, reflect the specific demands of HR decision-making, where employees are entitled to understand how a recommendation about them was reached.
  2. Use-case classification: Separate low, medium and high-risk use cases so that controls are proportional to the seriousness of the people decision, reflecting the policy’s risk-based approach inspired by international frameworks such as the EU AI Act.
  3. Data governance: Define approved data sources, ownership, access, retention, lineage and quality controls, consistent with POPIA and the policy’s emphasis on data protection by design and default.
  4. Model governance: Document purpose, input variables, validation, fairness testing, monitoring and retirement rules, incorporating the policy’s call for regular algorithmic audits and bias testing.
  5. Human oversight: Decide what the system can recommend and what a human must always review or approve, directly applying the policy’s Human-in-the-Loop (HITL) approach and the principle of human control of technology.
  6. Policy and compliance alignment: Translate the framework into practical rules, standards and approval checkpoints, ensuring alignment with the proposed AI ethics board, AI regulatory authority and sectoral strategies.
  7. Monitoring and audit: Track bias, drift, complaints, exceptions, overrides and performance over time, as required by the policy’s monitoring processes and mandatory reporting frameworks.
  8. Communication and trust: Explain what the tool does, what data it uses and how employees can question an outcome, supporting the policy’s goals of sufficient transparency and sufficient explainability.
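The mapping between blocks 2 (use-case classification) and 5 (human oversight) can be sketched as a simple use-case registry, where each risk tier carries proportional controls. This is a minimal illustration only; the tiers, use cases and controls below are hypothetical examples, not prescriptions from the policy.

```python
# Illustrative sketch: a registry that maps HR AI use cases to risk tiers,
# and tiers to proportional controls. All names and values are hypothetical.

RISK_TIERS = {
    "low": {"human_review": "spot-check", "fairness_audit": "annual"},
    "medium": {"human_review": "sample-based", "fairness_audit": "quarterly"},
    "high": {"human_review": "mandatory approval", "fairness_audit": "per release"},
}

USE_CASES = {
    "cv_screening": "high",        # directly affects hiring decisions
    "shift_scheduling": "medium",  # affects working conditions
    "hr_faq_chatbot": "low",       # informational only
}

def controls_for(use_case: str) -> dict:
    """Return the risk tier and proportional controls for a registered use case."""
    tier = USE_CASES[use_case]
    return {"tier": tier, **RISK_TIERS[tier]}

print(controls_for("cv_screening"))
# {'tier': 'high', 'human_review': 'mandatory approval', 'fairness_audit': 'per release'}
```

The point of the sketch is the structure, not the values: controls are looked up from the tier, so a use case can never be deployed without an explicit classification.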

The Draft National AI Policy makes one thing clear: the question is no longer whether HR should use AI but how that AI must be governed.
