Department of Labor’s AI Best Practices: Key Takeaways for Employers

Blog Post

In October 2024, the Department of Labor released its principles and best practices for developers and employers using AI, aiming to give employers guidelines that promote workplace augmentation through AI while mitigating the inherent risks AI poses to the workforce itself. For example, AI has great potential to improve worker productivity and well-being; without the proper guardrails, however, AI in the workplace could produce negative outcomes such as job displacement, bias, and discrimination.

Employers that follow the DOL's voluntary guidance will likely take positive steps toward compliance with rapidly evolving and emerging AI laws and regulations.

Key Principles and Best Practices from the Department of Labor

The DOL’s AI principles and best practices center around eight pillars. Here are key takeaways for each:

1. Centering Worker Empowerment

The DOL considers this principle the “north star” of its guidance. In developing and deploying AI, all workers, including those from underserved communities, should be informed of and able to provide input on the company’s AI systems.

As a best practice, employers should institute an AI workplace policy that covers the DOL’s guidelines and includes a mechanism for employees to report concerns with AI systems and offer suggestions for improvement. Because the DOL takes the position that workers should have a right to provide input, we can presume it also takes the position that employees are protected from retaliation for providing that input.

2. Ethical Development of AI

Employers should establish ethical AI standards before developing or deploying AI. To ensure compliance with these ethical standards, employers should conduct impact assessments and commission independent audits to mitigate risks associated with workers’ safety, labor standards, and job quality.

3. AI Governance and Human Oversight

In developing AI governance structures and policies, input from across organizational components is critical, from leadership to workers. Employees should be trained on appropriate use cases and prohibited uses. Employers should implement meaningful human oversight of the AI systems and documentation requirements, especially to mitigate high-risk use cases, such as employment decisions. The easiest way to begin compliance with this initiative is to designate someone, like a compliance officer, to be responsible for overseeing the company’s use of AI.

4. Ensuring Transparency in AI Use

The DOL recommends that businesses provide advance notice and disclosure of any worker-impacting AI. This type of AI may include systems that analyze worker behavior or generative AI tools made available to workers. Ideally, the disclosure would include an explanation of employee use cases, how the AI system will monitor employees, what data will be collected, and the purpose of the AI system. Such disclosures should also be communicated in a clear and accessible manner.

5. Labor and Employment Rights

Employers should not undermine worker rights such as federally protected leave, break time, or accommodations. Federal labor standards, such as the Fair Labor Standards Act, still apply when using AI in the workplace. Employers should also audit AI systems for any disparate or adverse impacts on protected bases, such as race, color, national origin, religion, sex, disability, age, and genetic information. By encouraging workers to raise concerns about AI systems, employers can help mitigate legal risks associated with AI.

6. Using AI to Enable Workers

Before procuring AI systems and technologies, employers should consider how AI adoption will impact tasks, skills, opportunities, and risks. Assuming AI governance is in place, AI systems should have a pilot period before widespread adoption. To the extent the AI results in productivity gains or increased profits, employers may consider how capitalizing on this efficiency can increase worker pay.

7. Support Workers Impacted by AI

As with any technological innovation, AI has the potential to displace workers. Employers should prioritize training opportunities, reallocating workers displaced by AI, and upskilling their workforce.

8. Responsible Use of Worker Data

Privacy by design is imperative for the deployment of AI, including safeguards to secure and protect worker data. For example, if an AI system collects personally identifiable, private, or confidential information, there could be grave risks of intrusion by internal and external threats.

Employers should also avoid collecting, retaining, or otherwise handling worker data that is unnecessary for a legitimate and defined business purpose. Employers should never share workers’ collected data outside the business unless the employees have given free and informed consent and the third party has adequate security practices. Internally, access to the data collected by AI should be restricted by default, with access granted according to the principle of least privilege.

The Future of DOL AI Guidance

Whether the incoming White House administration will change the direction and goals of DOL AI guidance remains to be seen. However, one thing is certain: AI in the workplace is here to stay.

The principles and best practices described here are critical for employers seeking to realize productivity gains while empowering their workforce. Additionally, adherence to this voluntary guidance will likely help employers anticipate emerging AI legal and regulatory requirements.

If you need assistance with AI-related issues in the workplace or otherwise, contact a member of the Woods Rogers Cybersecurity & Data Privacy team or Labor & Employment team.
