White House “Voluntary” AI Pledge from Big Tech Signals Business Need for AI Framework

Does your business have a framework to ensure safe, secure, and trustworthy artificial intelligence use by employees? Although federal law does not require businesses to have an AI framework, on July 21, 2023, seven leading AI companies voluntarily pledged to adopt the White House’s AI Commitment to manage AI risks and ensure safe, secure, and trustworthy AI. This voluntary adoption of responsible AI frameworks by major tech companies, including OpenAI, Google, and Amazon, marks an important opportunity for businesses to follow suit. The White House AI Commitment underscores the need for federal AI regulation beyond the current patchwork of federal and state laws, and it signals that businesses should begin now with a voluntarily adopted, self-governing AI framework, even in states with less stringent data privacy laws.

Privacy and Cybersecurity Principles in White House Voluntary AI Commitment

The three principles of safety, security, and trust in the White House AI Commitment are divided into eight commitments, several of which aim to prioritize individual data privacy and mitigate cybersecurity risks posed by business AI use, including:

  • Prioritizing research on societal risks to avoid harmful bias and discrimination, and protect privacy;
  • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights (the learned parameters at the core of an AI model); and
  • Incentivizing third-party discovery and reporting of cybersecurity issues and vulnerabilities.

These principles, along with the White House’s Blueprint for an AI Bill of Rights, suggest that adopting a responsible AI framework would demonstrate a commitment to individual privacy for customers and employees and help mitigate potential cybersecurity risks. For example, inaccurate or biased input data used by employees could lead to sub-optimal or discriminatory business decisions. Employees may share trade secrets and proprietary information with third-party AI platforms, and that information could, in turn, lose its proprietary status. And if input data containing sensitive personal information about employees or customers is compromised, the business could suffer reputational damage or face substantial liability under existing laws. A responsible AI framework for your business can help mitigate these and other privacy and cybersecurity risks.

Businesses Should Carefully Craft Their AI Framework, as It Could Be Enforceable by the FTC

Despite the “voluntary” nature of the White House AI Commitment, the Federal Trade Commission (FTC) could enforce a business’s publicly adopted AI framework as an unfair or deceptive trade practice. Since making its pledge, Amazon has publicly touted its commitment to the responsible use of AI. Could this “voluntary” pledge to responsible AI become mandatory?

For years, the FTC has taken action against businesses for privacy policy violations under Section 5 of the FTC Act, resulting in massive fines and consent decrees, and it has already demonstrated a willingness to police AI practices. For example, the agency recently opened a consumer protection investigation into OpenAI, in part over potential unfair or deceptive privacy or data security practices related to AI.

Given the complexity of AI frameworks and the possibility of FTC enforcement, businesses should design their AI frameworks carefully, foster cross-department collaboration, and account for data privacy and cybersecurity risks.

If you have questions or concerns about your business’s use of artificial intelligence, please contact a member of the WRVB Cybersecurity & Data Privacy practice team.

The author thanks 2023 WRVB Summer Associate Rafael Hernandez, a student at American University Washington College of Law, for his co-authorship, research, and insightful contributions to this article.
