Legal Implications of Using AI as an Exam Invigilator

Universities around the globe have been profoundly affected by stay-at-home orders, which have required them to close their doors and shift to online teaching and learning. To avoid postponing examinations amid the Covid-19 outbreak, many higher-education institutions have turned to online proctoring tools, raising complex questions about how they can ensure the integrity of online assessments while respecting ethical and legal constraints, especially students’ fundamental rights to privacy, data protection and non-discrimination. In particular, universities are increasingly relying on AI-based facial recognition technologies (FRT) to authenticate remote users connecting from off campus and to detect cheating and other suspicious behavior throughout the online exam process.

An article by Liane Colonna

The use of AI-based proctoring systems in higher education raises questions about the surveillance effect of these tools, which may make students feel as though they are under a constant microscope as they take online exams in their homes, traditionally a place afforded a high level of legal protection. Not only can intensely personal information about a student, such as lifestyle choices and socio-economic status, be revealed in the home setting, but online assessment tools may also make students feel like suspected cheaters before they have even submitted any work, greatly increasing their already potentially high test anxiety. Ultimately, the surveillance capabilities of AI-based proctoring tools may create a serious lack of trust and cooperation between students and the institution, negatively impacting the teaching and learning process. While institutions insist that these tools are necessary to fulfil the requirements of distance education and to ensure the integrity of exams, students raise legitimate concerns about whether universities have lawful grounds to process their personal data, particularly when their consent is not provided.

There are further concerns about the technical and social biases that can be embedded in the algorithms behind AI-based proctoring tools, which mean that marginalized students disproportionately and unfairly pay the price of these technologies because of sexist, ableist, and heterocentrist norms reflected in the systems. Groups at risk of discrimination by proctoring systems include women; students of color; students with accessibility needs; students with learning disabilities, neurodivergence, and anxiety; low-income and rural students; and transgender students. Such biases can, for example, make FRT better at detecting light-skinned people than dark-skinned people, and better at detecting men than women.
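To make this kind of disparity concrete, the sketch below shows how a per-group false non-match rate could be measured for a face-verification component. The measurement approach is standard in biometrics evaluation, but the group labels, threshold and scores here are entirely hypothetical and not drawn from any particular proctoring product.

```python
# Illustrative sketch: measuring the false non-match rate (FNMR) per demographic
# group for a face verification component. All group labels, scores and the
# threshold are hypothetical.
from collections import defaultdict

def fnmr_by_group(genuine_attempts, threshold=0.6):
    """genuine_attempts: (group_label, similarity_score) pairs, each from a
    comparison of a person against their own enrolled template. Returns the
    share of genuine attempts that were wrongly rejected, per group."""
    totals = defaultdict(int)
    rejections = defaultdict(int)
    for group, score in genuine_attempts:
        totals[group] += 1
        if score < threshold:  # a genuine user is rejected: a false non-match
            rejections[group] += 1
    return {group: rejections[group] / totals[group] for group in totals}

# Hypothetical genuine comparison scores for two groups.
attempts = [
    ("group_A", 0.82), ("group_A", 0.71), ("group_A", 0.55),
    ("group_B", 0.58), ("group_B", 0.49), ("group_B", 0.75),
]
print(fnmr_by_group(attempts))  # e.g. {'group_A': 0.33..., 'group_B': 0.66...}
```

A markedly higher false non-match rate for one group means members of that group are more often wrongly flagged or locked out, which is the mechanism behind the disproportionate burden described above.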

As it currently stands, the legislative landscape in the EU concerning FRT in the Higher Education (HE) context is highly complex and constantly evolving. Legal and ethical obligations are reflected in a number of binding legal instruments as well as in soft law and proposed legislation. Currently, there is no specific legal regime applicable to biometric data other than the GDPR. That said, there is a highly developed human-rights framework (e.g. the European Convention for the Protection of Human Rights and Fundamental Freedoms, ETS No. 5, 213 U.N.T.S. 221, 4 November 1950) governing the fundamental rights to privacy, data protection and non-discrimination, as well as an emerging regime that will govern high-risk AI such as the use of FRT in the higher-education context.

Indeed, FRT is a central concern of the proposed AI Regulation, which expressly prohibits, in Article 5(1)(d), the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless certain limited exceptions apply. Outside that prohibited scenario of law-enforcement use in publicly accessible spaces, Recital 33 and Annex III(1)(a) explain that “real-time” and “post” remote biometric identification systems should be classified as high-risk. Referring explicitly to the educational sector, Annex III(3)(b) states that AI systems used for “assessing students in educational training” constitute high-risk AI. Annex III(3)(a) also refers to “AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions.” Recital 35 clarifies that the reason for this is that such systems “may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood.” It is unclear whether Annex III’s reference to “assessing students in educational training” refers to using AI to facilitate remote proctoring of online assessments or to using AI literally to assess, that is to score, students through, for example, some kind of grading software. Where an AI system is deemed high-risk, providers and users of the technology will face an extensive range of obligations, many of which must be performed ex ante. Article 71 explains that regulators will be able to fine non-compliant actors up to €30m or 6% of their worldwide annual turnover, whichever is higher.
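To give a sense of the scale of these sanctions, a minimal sketch of the fine ceiling is shown below; it assumes the “whichever is higher” rule for the most serious infringements set out in Article 71(3) of the proposal and is illustrative only.

```python
# Illustrative sketch of the fine ceiling for the most serious infringements
# under Article 71(3) of the proposed AI Regulation: EUR 30 million or 6% of
# total worldwide annual turnover, assuming the "whichever is higher" rule.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

print(max_fine_eur(200_000_000))    # 30000000 (the flat EUR 30m cap is higher)
print(max_fine_eur(1_000_000_000))  # 60000000.0 (6% of turnover is higher)
```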

Consistent with the definition set forth in the General Data Protection Regulation (GDPR), the proposed AI Regulation makes a sharp distinction between identification and verification techniques, placing stricter rules on the former and essentially placing AI used for verification purposes outside the scope of high-risk AI altogether.
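The practical difference is that verification is a one-to-one comparison of a live capture against the single template of the identity the person claims (for example, the enrolled photo of the student logging in to the exam), whereas identification is a one-to-many search of a gallery to find out who the person is. The sketch below illustrates this schematically; the embeddings, threshold and helper names are hypothetical and not taken from any particular proctoring system.

```python
# Illustrative sketch of the verification (1:1) vs identification (1:N) distinction.
# Face embeddings are represented as plain vectors; all values are hypothetical.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def verify(probe, enrolled_template, threshold=0.8):
    """1:1 verification: does the probe match the single claimed identity?"""
    return cosine_similarity(probe, enrolled_template) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N identification: search the whole gallery for the best match."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

probe = [0.9, 0.1, 0.3]
gallery = {"student_42": [0.88, 0.12, 0.31], "student_7": [0.1, 0.9, 0.4]}
print(verify(probe, gallery["student_42"]))  # True: claimed identity confirmed
print(identify(probe, gallery))              # 'student_42': best match above threshold
```

Because identification requires building and searching a database of many people’s templates, it is generally regarded as the more intrusive operation, which is why the proposal treats it more strictly.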

Ultimately, the lawfulness of processing under the GDPR is a fundamental question when it comes to AI-based proctoring tools. Whether the data used by proctoring systems are classified as ordinary personal data, subject to one of the Article 6 grounds for lawful processing, or as sensitive data, subject to one of the Article 9(2) exceptions, is of critical relevance. If the data are classified as ordinary personal data, then privately funded universities may be able to argue that they have a lawful ground to use online proctoring tools based on their “legitimate interest” in, for example, fulfilling the requirements of distance education by securely organizing remote exams. Likewise, publicly funded universities may be able to argue that the processing is necessary for the performance of a task carried out in the public interest, such as their legal obligation to administer exams, award degrees and, in doing so, make efforts to prevent fraud and ensure the quality of the education. Once universities open up again and exams can be held in examination halls, however, online proctoring will likely be found disproportionate because a less privacy-intrusive alternative exists.

If the data collected by a proctoring system are classified as sensitive data, then they may not be processed unless one of the ten exemptions from the prohibition on processing biometric data found in Article 9(2) applies. Relying on consent as a valid ground to process personal data in connection with online proctoring is generally not possible because of the power imbalance and the hierarchical relationship between students and the teachers representing the university (see the opinion of the Swedish Authority for Privacy Protection in the Skellefteå Municipality case). There may be limited situations where Article 9(2)(g) can be invoked, particularly during the pandemic (see the opinion of the Danish Data Protection Agency in the IT University case).

The question of which legal basis is appropriate in a specific situation when utilizing AI-based online proctoring will always depend on the circumstances, the concrete purpose for the use of the software and the type of data being processed. It is of utmost importance that the educational institution is able to justify its choice of a particular legal basis. In addition, the processing of personal data must be necessary and proportionate to achieve the underlying purpose.
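As a purely schematic way of showing how these considerations interact, the sketch below encodes the candidate legal bases discussed above as a simple decision helper. It is illustrative only: the categories and outcomes are simplifications of the analysis in the article, and an actual assessment always turns on the concrete circumstances and a necessity and proportionality analysis.

```python
# Schematic sketch of the candidate GDPR legal bases discussed above.
# Purely illustrative: a real assessment depends on the concrete circumstances,
# the purpose of the processing, and a necessity/proportionality analysis.
def candidate_legal_basis(data_is_sensitive: bool, publicly_funded: bool,
                          exceptional_circumstances: bool = False) -> str:
    if data_is_sensitive:
        # Consent (Art. 9(2)(a)) is generally unavailable because of the
        # student-university power imbalance; Art. 9(2)(g) only in limited cases.
        return ("Art. 9(2)(g) substantial public interest (limited situations)"
                if exceptional_circumstances
                else "no clear exception: processing prohibited by Art. 9(1)")
    if publicly_funded:
        return "Art. 6(1)(e) task carried out in the public interest"
    return "Art. 6(1)(f) legitimate interests (subject to a balancing test)"

print(candidate_legal_basis(data_is_sensitive=False, publicly_funded=True))
print(candidate_legal_basis(data_is_sensitive=True, publicly_funded=True,
                            exceptional_circumstances=True))
```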

Relevant factors for determining whether the use of AI-based proctoring tools is necessary and proportionate to achieving the underlying aims

This blog post is based on the forthcoming article “Legal Implications of Using AI as an Exam Invigilator” in: 2020-2021 Nordic Yearbook – Law in the Era of Artificial Intelligence (eds. Liane Colonna and Stanley Greenstein) (Stockholm, SJFs Open Access Litteratur), which you can access on SSRN here.

The support of The Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS), Ethical and Legal Challenges in Relationship to AI-driven Practices in Higher Education (MMW2020.0138), is gratefully acknowledged.

Published under licence CC BY-NC-ND. 

This blog post was written by


  • Liane Colonna

    Liane is currently employed as a researcher at the Swedish Law and Informatics Research Institute (IRI), where she works within the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS). The project she is involved with is called “Ethical and Legal Challenges in Relationship to AI-driven Practices in Higher Education.” Liane is also the Action Vice Chair of the COST Action “Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living” (“GoodBrother”). Furthermore, she is a Co-Principal Investigator (PI) of the Marie Skłodowska-Curie Actions Innovative Training Network “Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living” (“visuAAL”).
