Several States Move to Restrict Public Colleges’ Use of AI Proctoring Tools
Boston, Mass. – A new wave of state-level action is reshaping how public colleges can use artificial intelligence–based exam proctoring, after lawmakers in several states advanced bills over the past three days that would restrict or ban certain automated monitoring tools on privacy and civil rights grounds.
The proposals, introduced or moved forward in Massachusetts, Illinois, and Washington, reflect growing concern among legislators about the use of facial recognition, behavioral analytics, and biometric data in higher education assessment. While similar bills have surfaced in prior sessions, the latest measures are notable for their scope and for the speed with which they have advanced through committees, according to reporting by Inside Higher Ed and Higher Ed Dive.
Recent Developments
In Massachusetts, a joint committee on higher education voted this week to advance legislation that would prohibit public colleges from requiring students to use AI-powered remote proctoring systems that rely on facial recognition or continuous video monitoring, unless institutions can demonstrate that no reasonable alternative exists. The bill would also require explicit student consent and mandate annual reporting on any approved use.
Illinois lawmakers, meanwhile, moved a proposal out of committee that would extend the state’s Biometric Information Privacy Act to cover AI proctoring vendors working with public universities. Under the bill, institutions could face liability if vendors collect or store biometric data without meeting strict disclosure and retention requirements, as reported by Reuters.
In Washington state, a Senate bill introduced late last week would require public colleges to conduct civil rights impact assessments before deploying automated proctoring tools, citing concerns about disparate impacts on students of color and students with disabilities.
Context and Background
AI-based exam proctoring expanded rapidly during the COVID-19 pandemic as institutions shifted to remote instruction. Vendors marketed the tools as a way to preserve academic integrity in online courses, but students and advocacy groups have long criticized them for invasive surveillance and algorithmic bias.
Several universities, including some large public systems, have already scaled back or abandoned AI proctoring after student protests and internal reviews, according to coverage by The Chronicle of Higher Education. Until now, however, most decisions have been left to individual institutions rather than mandated by law.
Implications for U.S. Higher Education
If enacted, the measures could significantly limit the use of automated proctoring at public colleges and universities, particularly in high-enrollment online courses. Institutions may need to invest in alternative assessment models, such as project-based evaluations or increased human proctoring, which could raise instructional costs.
The proposals also add compliance complexity for multi-state public university systems and for private vendors operating nationally. Higher education attorneys told Politico that differing state standards could accelerate a shift away from biometric-based tools altogether.
What Comes Next
The bills now head to their full legislative chambers, with votes expected later this winter. Even if some measures stall, policy analysts expect continued scrutiny of AI surveillance technologies in higher education, especially as more states revisit data privacy laws in 2026.
For colleges, the immediate challenge will be balancing academic integrity, accessibility, and legal risk as the regulatory environment around educational AI grows more fragmented.