Sweeping AI Legislation Under Consideration in Virginia
Virginia, a leader in technology and privacy regulation, is methodically examining artificial intelligence legislation. In particular, significant legislation establishing a regulatory framework for high-risk artificial intelligence (AI) systems is currently being considered by the Virginia General Assembly’s Joint Commission on Technology and Science (JCOTS). JCOTS, a permanent legislative agency that studies and develops technology and science policy in Virginia, has held several hearings on the topic to gather expert testimony on AI issues and has formed an AI-specific subcommittee. The JCOTS AI Subcommittee is considering two pieces of legislation that would govern the use of high-risk AI systems by public entities and private sector entities.
Virginia has been active in establishing rules and guidelines for the use of AI tools and applications. For example, in January 2024, Virginia Governor Glenn Youngkin signed Executive Order 30 on Artificial Intelligence (EO 30), which established “important safety standards to ensure the responsible, ethical, and transparent use of AI by state government.” EO 30 contained provisions impacting the adoption and use of AI technologies by state agencies, K-12 schools, colleges and universities, and law enforcement. In addition, the AI governance policies and standards in EO 30 impact third parties working with the Commonwealth, such as businesses, suppliers, and contractors.
The proposed legislation would go further by codifying formal rules and regulations pertaining to the use of high-risk AI systems. Please note that, as of the date of this post, the bills remain in “working draft” form and are subject to change as they make their way through the legislative process. Because Virginia’s General Assembly convenes for only a short, part-time session at the beginning of each year, bills that move forward can do so rapidly, as the world saw when Virginia enacted its consumer data privacy law, the Virginia Consumer Data Protection Act. If the bills make it through the General Assembly, Virginia will join Colorado as the second state to pass an AI regulatory framework. Considering Virginia’s rapid legislative timelines, businesses and public bodies would be wise to take note of the proposed bills and their frameworks now.
Overview of Virginia AI Legislation
As mentioned, the JCOTS AI Subcommittee is currently considering two pieces of significant AI legislation. Delegate Michelle Maldonado drafted the “High-Risk Artificial Intelligence Developer Act” (pdf), which is intended to regulate private sector use of certain AI tools and applications. Senator Lashrecse Aird drafted AI legislation (pdf) that would regulate the use of certain AI systems by public entities. Both bills primarily focus on regulating the use of “high-risk” AI systems, defined as “any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”
A “consequential decision” is defined as “any decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer of, or the cost or terms of” the following:
- Education enrollment or an education opportunity;
- Employment or an employment opportunity;
- A financial or lending service;
- An essential government service;
- Health care services;
- Housing;
- Insurance; or
- A legal service.
It appears one of the primary objectives of the current pair of AI bills is to mitigate the risk of an AI system engaging in “algorithmic discrimination.” Under the proposed legislation, “algorithmic discrimination” is defined as “any discrimination that results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, sexual orientation, veteran status, or other classification protected under state or federal law.” In effect, both bills seek to ensure that humans remain actively involved in consequential decisions impacting the livelihoods of Virginia residents and can review (and correct) any indicators of AI bias or discrimination.
It is worth noting that an AI system or service is not considered a “high-risk” AI system if it is intended to:
- Perform a narrow procedural task;
- Improve the result of a previously completed human activity;
- Detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without sufficient human review; or
- Perform a preparatory task to an assessment relevant to a consequential decision.
There are also several exempted AI tools and applications under the proposed legislation. For example, the bills state there will be a “rebuttable presumption” that the following technologies would generally not be considered a high-risk AI system:
- Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and that is subject to an acceptable use policy prohibiting the generation of content that is discriminatory or harmful (e.g., ChatGPT, Google Gemini);
- Anti-fraud technology that does not use facial recognition technology;
- Anti-malware technology and anti-virus technology;
- AI-enabled video games;
- Calculators and spreadsheets;
- Cybersecurity technology;
- Databases and data storage;
- Firewall technology;
- Internet domain registration and website loading;
- Networking;
- Spam and robocall filtering;
- Spell-checking technology;
- Web caching; and
- Web hosting or any similar technology.
Developers, Integrators, and Deployers of High-Risk AI Systems
The proposed legislation contains regulatory requirements for “Developers,” “Integrators,” and “Deployers” of high-risk AI systems, which are defined as follows:
- Developer of High-Risk AI System: Any person doing business in the Commonwealth that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, sold, leased, given, or otherwise provided to consumers in the Commonwealth.
- Integrator of a High-Risk AI System: A person that knowingly integrates an artificial intelligence system into a software application and places such software application on the market. The definition of "Integrator" does not include a person offering information technology infrastructure.
- Deployer of a High-Risk AI System: Any person doing business in the Commonwealth that deploys or uses a high-risk artificial intelligence system to make a consequential decision.
Regulatory Requirements Imposed on High-Risk AI Developers
Under the current iteration of Virginia’s AI legislation, “Developers” of high-risk AI systems would be obligated to “make available” a set of disclosures and documentation to “Deployers” of a high-risk AI system. This set of disclosures and documentation must include a general statement describing the “intended uses” of the high-risk AI system. In addition, the following types of documentation must be ready for disclosure:
- Summaries of the types of data used to train the high-risk AI system;
- Any known or foreseeable limitations of the high-risk AI system, including risks of algorithmic discrimination from intended uses;
- The purpose and intended benefits and uses of the system;
- How the system was evaluated for performance and mitigation of algorithmic discrimination;
- Data governance measures covering the training datasets and examining the “suitability” of data sources, possible biases, and appropriate mitigation measures;
- The intended outputs of the high-risk AI system;
- How the high-risk AI system should be used, how it should not be used, and how it should be monitored by an individual when used to make (or as a substantial factor in making) a consequential decision; and
- Any other information necessary to assist a deployer in understanding the system and risks for algorithmic discrimination.
If a “Developer” of a high-risk AI system performs an “intentional and substantial modification” to the AI system, then they must update their disclosures and documentation “no later than 90 days” after the modification.
Despite a myriad of new disclosure requirements, the proposed legislation expressly states that nothing in the proposed law “shall be construed to require a Developer to disclose any trade secret.”
Regulatory Requirements Imposed on Deployers of High-Risk AI Systems
Like “Developers” of high-risk AI systems, “Deployers” of such systems would be required to meet an array of compliance obligations under the proposed AI legislation. Notably, “Deployers” of high-risk AI systems would be required to implement a risk management policy and program that:
- Specifies the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk AI system to make a consequential decision;
- Aligns with existing standards (e.g., the National Institute of Standards and Technology's AI Risk Management Framework and/or the International Organization for Standardization's ISO 42001); and
- Is regularly reviewed and updated.
In addition, “Deployers” of high-risk AI systems must complete an impact assessment before the high-risk AI system can be used to make or influence consequential decisions. Each impact assessment must include the following:
- A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk AI system;
- A description of the categories of data the high-risk AI system processes as inputs and the outputs such high-risk AI system produces;
- Whether the deployment or use of the high-risk AI system poses a reasonably foreseeable risk of algorithmic discrimination;
- Whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the Developer's intended uses of such high-risk artificial intelligence system;
- A list of any metrics used to evaluate the performance and known limitations of the high-risk AI system;
- A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and
- A description of any post-deployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise.
Required Transparency and Disclosures to Virginia Consumers
Under Virginia’s proposed AI legislation, all those involved in the development and use of high-risk AI systems would be required to proactively disclose the rationale behind adverse consequential decisions. This disclosure must include the degree to which the high-risk AI system contributed to the consequential decision, the type of data that was processed by the high-risk AI system in making the consequential decision, and the sources of such data.
Consumers would then be afforded the opportunity to correct inaccuracies or appeal any such adverse decisions. In addition, Deployers, Integrators, and Developers of high-risk AI systems would be required to provide users with public-facing disclosures when interacting with such AI systems.
Similarities and Differences to Colorado’s AI Law
In May 2024, Colorado became the first state in the United States to enact comprehensive AI legislation with the passage and signing of the Colorado AI Act (pdf). The landmark legislation is expected to go into effect in February 2026. Virginia’s AI legislation would go into effect in July 2026, assuming the current iteration passes and is signed into law.
Virginia’s AI legislation shares many similarities with the Colorado AI Act. For example, much like Colorado’s, the framework proposed in Virginia’s AI bills focuses on regulating Developers and Deployers of high-risk AI systems. Like Colorado, the compliance obligations for Developers and Deployers include detailed disclosures about a high-risk AI system (e.g., the intended use of the AI system, any reasonably known limitations, measures taken to mitigate reasonably foreseeable risks of algorithmic discrimination, and so forth). Also, like Colorado, enforcement authority under Virginia’s AI bills would be vested in the Office of the Attorney General. No private right of action would be afforded to consumers.
A notable departure in Virginia’s AI bills from the Colorado AI framework is the inclusion of “Integrators” of high-risk AI systems. This concept does not exist under the Colorado law or the European Union’s AI Act. Under the Virginia legislation, an Integrator of a high-risk AI system would be required to “develop and adopt an acceptable use policy, which shall limit the use of the high-risk artificial intelligence system to mitigate known risks of algorithmic discrimination.” In addition, an Integrator of a high-risk AI system would be required to provide the Deployer of such a system with a “clear, conspicuous notice” that contains the following:
- The name or other identifier of the high-risk artificial intelligence system integrated into a software application provided to the Deployer;
- The name and contact information of the Developer of the high-risk artificial intelligence system integrated into a software application provided to the Deployer;
- Whether the Integrator has adjusted the model weights of the high-risk artificial intelligence system integrated into the software application by exposing it to additional data, a summary of the adjustment process, and how such process and the resulting system were evaluated for risk of algorithmic discrimination;
- A summary of any other non-substantial modifications made by the Integrator; and
- A copy of the Integrator's acceptable use policy.
Another notable distinction between Virginia’s AI legislation and the Colorado AI Act is that the Commonwealth is simultaneously considering two bills that would impose different regulatory requirements on public sector Developers, Integrators, and Deployers of high-risk AI systems than on their private sector counterparts.
What Virginia Businesses Can Do To Prepare
Although Virginia’s AI legislation remains in “working draft” form and is subject to change, Virginia entities that have developed, integrated, or deployed high-risk AI systems (or are planning to do so) would be well served to review the requirements contained in the two bills and consider outlining a robust governance and compliance program. Taking proactive measures could help your organization strengthen its compliance posture, in the event these bills make their way through the legislative process and are signed into law.
The general regulatory framework proposed under Virginia’s AI bills could help shape and influence how entities develop and deploy certain types of AI systems. Steps that your organization may want to consider include:
- Develop a statement of the purpose, intended use cases, and benefits afforded by the high-risk AI system.
- Consider implementing the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for AI systems. Why? High-risk AI systems that conform to these standards “shall be presumed to be in conformity with related requirements” in Virginia’s AI legislation.
- Start drafting documentation that will be necessary for Developers, Integrators, and Deployers. At the very least, consider establishing a protocol for how such documentation will be drafted and maintained.
- Start creating the necessary infrastructure (i.e., principles, processes, and personnel) to conduct AI system impact assessments and annual high-risk AI system reviews, to make public-facing consumer disclosures, and to report any indicators of algorithmic discrimination within a high-risk AI system in a timely fashion.
- If your organization has an AI system that appears to fall under a particular exemption, consider the documentation that may be necessary to establish and substantiate the application of the exemption.
If you are looking for guidance on how to develop a compliance program for a potential AI regulatory framework in the Commonwealth, please contact a member of the Woods Rogers Cybersecurity & Data Privacy practice team.