Artificial intelligence has steadily woven itself into the fabric of industries across the globe, and human resources is no exception. Engagement surveys and pulse feedback, recognition engines, performance scoring, and even general employee sentiment monitoring are all ripe for AI-driven change. But 2026 won’t be a free-for-all: HR teams are expected to deploy AI responsibly.
WorkTango has examined the key data and policies from leading organizations and reports, including the NYC Department of Consumer and Worker Protection, the EEOC’s Employment Discrimination and AI for Workers guidance, the U.S. Department of Labor, McKinsey, and more. The results show that HR leaders are under increasing pressure to innovate with AI without crossing legal and ethical boundaries. This practitioner’s playbook for integrating AI into HR workflows can help you revamp your organization via actionable checklists, vetting questions, policy templates, and a blueprint for safer AI governance.
Understanding the regulatory landscape
Artificial intelligence is still a young technology, and many states have enacted regulations to monitor it even as the federal government pushes back against state-level AI rules. Before building an AI-enabled HR program, you need to understand the guardrails you may face.
These aren’t just abstract guidelines. Violating them carries the risk of audits, penalties, and litigation, so ensuring compliance is absolutely crucial.
NYC Local Law 144: The first binding U.S. AI employment law
New York City’s Automated Employment Decision Tools law was the first binding U.S. AI hiring regulation; enforcement began in mid-2023. The law primarily requires:
- Independent annual bias audits, with results made publicly available.
- Disclosure to candidates and employees about the AI tools used.
- Notice before use of AI systems.
Jurisdictions and organizations outside New York City have since adopted similar laws or standards, largely because the law lays out concrete, enforceable requirements for regulating artificial intelligence. The NYC Department of Consumer and Worker Protection maintains compliance guidance that businesses operating in the city can reference.
Federal guidance and enforcement
On Dec. 11, the Trump administration announced an executive order to remove barriers for the AI industry, including those set by states. However, as the Brennan Center for Justice notes, the Constitution does not allow the president to stop states from taking these measures.
The executive order states that the administration will work with Congress to enact a single national standard that would preempt state laws conflicting with the eventual federal framework. Professionals using AI should watch the regulatory space closely to see how this plays out.
Beyond the U.S.
The EU Artificial Intelligence Act applies to U.S.-based organizations that have EU-based employees, pass EU data through AI systems, or use vendors operating models within the EU. If any of these apply, your organization must comply with the act’s rules.
Specifically, the act classifies AI tools used in employment as high risk, meaning they require conformity assessments, risk logs, human oversight, and transparency. The act was approved in early 2024, but implementation rolls out in phases through 2027. Despite the extended timeline, the combination of the act and domestic regulations has organizations preparing now to ensure compliance.
Data minimization and bias audits
The fastest path to an HR and AI compliance failure is over-collecting data or deploying AI models without any bias assessment. The first risk is avoidable by following the principle of data minimization, which comes down to a simple rule: Collect only the data you need, use only what is necessary, and delete everything else. In the context of HR, this means not analyzing more employee data than a task requires, avoiding sensitive attributes, and not keeping indefinite archives.
Bias audits are nonnegotiable. For those unfamiliar with the term in relation to AI, bias refers to systematic and unfair discrimination in the output of an artificial intelligence system due to biased input data. A report published in August 2025 in the research journal BMC Medical Informatics and Decision Making noted that researchers identified instances of gender bias in AI output.
Google’s AI, Gemini, referred to men’s health issues with terms like “disabled” or “complex,” while describing similar women’s health issues in less serious terms. While an extreme example, it underscores why regulators expect organizations to test AI models for bias at least annually. Ideally, testing is more thorough and happens before deployment, during use, and after updates as well. A legitimate bias audit should include the following elements (a small illustration of the core calculation follows the checklist):
- Independent third-party testing.
- Testing across all protected categories.
- Clear documentation of methodology.
- Public posting (if in New York City).
- Action plans for any disparities found.
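To make that concrete, here is a minimal sketch of the selection-rate and impact-ratio calculation that bias audits of hiring tools typically report. The groups, outcomes, and the four-fifths screening threshold used here are illustrative assumptions, not a substitute for an independent audit.

```python
# Minimal sketch: selection rates and impact ratios by group.
# The data and group labels are hypothetical placeholders.
from collections import defaultdict

# Hypothetical outcomes: (protected-category group, was the candidate advanced?)
records = [
    ("Group A", True), ("Group A", True), ("Group A", False),
    ("Group B", True), ("Group B", False), ("Group B", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, was_selected in records:
    totals[group] += 1
    if was_selected:
        selected[group] += 1

# Selection rate per group, then impact ratio relative to the most-favored group.
rates = {g: selected[g] / totals[g] for g in totals}
best_rate = max(rates.values())
for group, rate in sorted(rates.items()):
    impact_ratio = rate / best_rate
    # The 4/5 rule is a common screening threshold, not a legal bright line.
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

An auditor would run this kind of comparison across every protected category and document any disparities alongside remediation plans.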
Minimizing data can seem impossible given the volume of information most HR systems collect. With that said, this checklist can help you make sure only the most important information is retained (a short sketch of the pseudonymization and retention steps appears after the list):
- Document the specific business purpose for each data element collected.
- Remove or pseudonymize personally identifiable information where possible.
- Set automatic deletion timelines for raw data and logs.
- Hire independent auditors with AI bias certification.
- Test across all protected categories (race, gender, age, disability, etc.).
- Maintain an audit repository accessible to compliance officers.
- Publish bias audit summaries when required.
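As a rough illustration of the pseudonymization and deletion items above, the following sketch replaces a direct identifier with a salted hash and drops records older than a hypothetical 90-day window. The field names, retention period, and salt handling are assumptions for demonstration only.

```python
# Minimal sketch of pseudonymization and retention checks for feedback records.
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)      # hypothetical deletion timeline
SALT = "rotate-and-store-securely"  # in practice, keep salts out of source code

def pseudonymize(employee_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + employee_id).encode()).hexdigest()[:16]

def within_retention(record: dict, now: datetime) -> bool:
    """Keep a record only while it is inside the retention window."""
    return now - record["submitted_at"] <= RETENTION

now = datetime.now(timezone.utc)
raw = [
    {"employee_id": "E1001", "comment": "Great onboarding", "submitted_at": now - timedelta(days=10)},
    {"employee_id": "E1002", "comment": "Too many meetings", "submitted_at": now - timedelta(days=200)},
]

kept = [
    {"person": pseudonymize(r["employee_id"]), "comment": r["comment"]}
    for r in raw
    if within_retention(r, now)
]
print(kept)  # only the in-window record survives, with no direct identifier
```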
Employee notice, transparency, and opt-out rights
Employees are more likely to accept AI if they understand it. Hidden monitoring or surprise automation is a quick way to sow distrust among your employees and potentially even open the door to legal issues. Depending on your jurisdiction, you may be required to inform your employees of any use of AI in HR-related tasks.
The EU AI Act mandates worker access to assessments and explanations. Similarly, a few years ago, the National Labor Relations Board issued a framework against unlawful surveillance of employees, which regulators are now applying to AI. In practice, this generally means employees need to be told when AI is used, what data is being collected, which decisions are automated, and how to opt out where that option exists.
When issuing an AI transparency notice, avoid any technical jargon that may make things unclear for your employees. Always emphasize the scope of the technology and be clear on escalation paths if employees are uncomfortable. Not every AI system requires an opt-out, but any voluntary or wellness-oriented AI system should probably offer one as a best practice.
Employee FAQ template
Here are some sample employee questions that may come up when introducing AI into HR workflows, along with sample answers that can be tailored to your organization:
- Q: How is AI being used in our workplace?
AI tools are used to collect, categorize, and summarize employee feedback, surveys, and recognition submissions. AI does not make final employment decisions.
- Q: What data about me is being collected?
Only the information required for surveys, recognition programs, or feedback workflows is collected. Sensitive data is either excluded or anonymized.
- Q: Can AI decisions about me be wrong or biased?
AI is imperfect, and our systems are audited regularly to prevent bias. All final decisions involve human review.
- Q: Can I opt out of AI analysis?
For voluntary programs, you may opt out anytime using the link in the HR portal.
- Q: Who can I contact if I have concerns?
Email the HR privacy hotline or speak with your HR representative.
- Q: How do I access or correct my data?
Submit a request via the HR privacy portal, and we will fulfill it within all statutory timelines.
Human oversight and vendor due diligence
Even highly trusted AI systems can create liabilities for both employers and vendors. The best protection is a governance model that keeps humans in control at all times and holds vendors accountable: AI should not make final adverse decisions, and humans must be able to override AI recommendations.
When humans act simply as rubber stamps, automatically approving whatever AI outputs, regulators treat the decision as if AI made it, which is often prohibited. A compliant human review model should include someone who understands the AI model, a defined escalation threshold, and guidance on when to reject AI suggestions. It’s also best practice to document the reasons for overrides and to assign role-based access to a model’s explanations. This helps prevent the so-called rubber-stamp effect.
Vendor due diligence checklist
Partnering with vendors that use AI models is challenging, given your limited control over the model. However, that’s no excuse to skip checking for discriminatory outcomes, as you can still be held liable. To reduce this risk, work through the following due diligence checklist:
- Request and review bias audit reports.
- Assess model explainability documentation.
- Verify alignment with recognized standards and frameworks (ISO/IEC 42001, NIST AI RMF).
- Confirm data privacy compliance (GDPR, CCPA).
- Obtain contractual indemnification and warranties.
- Establish ongoing monitoring commitments.
Transparent reporting and ongoing monitoring
AI governance is not a one-and-done task. Ongoing monitoring and reporting are essential for both compliance and trust. Internal reporting should include quarterly dashboards, bias metrics, model update logs, opt-out rates, incident reports, and other information that shows how the deployment is going. Legal, HR, and leadership teams should review this data.
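As a simple illustration of what such a quarterly review might check, the sketch below scans a hypothetical metrics log and flags quarters that need attention; the metric names, values, and thresholds are assumptions, not prescribed benchmarks.

```python
# Minimal sketch of a quarterly monitoring check over a hypothetical metrics log.
quarterly_metrics = [
    {"quarter": "2025-Q3", "min_impact_ratio": 0.91, "opt_out_rate": 0.04, "incidents": 0},
    {"quarter": "2025-Q4", "min_impact_ratio": 0.84, "opt_out_rate": 0.05, "incidents": 1},
    {"quarter": "2026-Q1", "min_impact_ratio": 0.77, "opt_out_rate": 0.09, "incidents": 2},
]

for row in quarterly_metrics:
    flags = []
    if row["min_impact_ratio"] < 0.80:   # common 4/5 screening threshold, used here illustratively
        flags.append("bias metric below threshold")
    if row["opt_out_rate"] > 0.08:       # illustrative escalation trigger
        flags.append("opt-out rate rising")
    if row["incidents"] > 0:
        flags.append(f'{row["incidents"]} incident(s) logged')
    status = "; ".join(flags) if flags else "no action needed"
    print(f'{row["quarter"]}: {status}')
```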
External transparency is equally important. The 2024 Zendesk CX Trends Report found that 75% of organizations polled believed a lack of transparency could lead to customer churn. To avoid this, publish bias audit findings and provide summaries of the AI systems you use. For employees, consider offering data access and explanation rights.
Reporting best practices checklist
The last thing you want is to stumble into legal issues or lose customers because they didn’t understand your AI integration. The following are best practices for reporting on AI in your organization:
- Establish a quarterly review cadence for bias metrics.
- Create accessible dashboards for internal stakeholders.
- Publish public transparency reports with audit results.
- Document remediation actions taken.
- Conduct annual third-party audits.
HR can deploy AI safely when governance comes first
AI in HR is no longer an experimental idea. It is a regulated, litigated, and heavily scrutinized reality. Deployed responsibly, AI can deliver enormous value to an organization in the form of better insights, faster workflows, and more equitable processes.
Keep your AI in compliance through data minimization, independent bias audits, vendor due diligence, and always having a human in the loop. Organizations that embrace these principles will not only be able to avoid compliance landmines, but also build employee trust while unlocking the full potential of AI in the workplace.
This story was produced by WorkTango and reviewed and distributed by Stacker.