Regulation of AI - Laws and Guidelines

In a rapidly growing and challenging field like AI, it can be difficult to know what is allowed and appropriate, and what is considered prohibited or inappropriate. Many teachers and students have not had the time to think through and discuss the implications for education. No clear practice has yet been developed, and the applicability of older legislation or regulations may not be easy to determine.

The situation is naturally unsatisfactory for both teachers and students, who desire greater clarity.

The development of AI has been particularly controversial in three areas of higher education: academic integrity, personal privacy under the GDPR, and copyright.

Academic Integrity

Presenting work as one’s own when it has actually been done by someone else is a fundamental breach of academic integrity; in educational contexts it constitutes deception during examination. Here, various types of generative AI have contributed to the emergence of a grey area, where students often do not know where the line runs between what is considered cheating and what is legitimate use of AI tools.

Universities have reacted differently to this development. A few have temporarily banned their students from using generative AI altogether, an approach that is unlikely to be sustainable. More commonly, central and/or local guidelines are established that determine how AI may or may not be used in different situations. A common requirement is transparency, i.e., that the extent of such use must always be declared clearly and openly. A European forum working on these and other issues is the European Network for Academic Integrity (ENAI).

Guidelines are undoubtedly desirable for the sake of clarity, but most importantly, the academic core values they protect should shape the students’ learning environment throughout their education.

Personal Privacy

The large amounts of data handled by AI systems, which may include sensitive personal data, must be managed responsibly. The various tools have terms of use that explain how personal data is handled, but there is also legislation that sets the framework for what is allowed. In the EU, the AI Act was approved by the European Parliament in March 2024 and entered into force in August 2024. Its requirements are primarily directed at the actors developing the tools, but also at the organizations that make them available to their users, such as universities.

The EU AI Act completely prohibits certain uses of AI, particularly those that infringe on personal privacy, and classifies other AI systems into different risk groups. The strictest regulations apply to the high-risk group, where specific requirements must be met. Notably, systems related to “Evaluating learning outcomes, including those used to steer the student’s learning process. Assessing the appropriate level of education for an individual. Monitoring and detecting prohibited student behaviour during tests” are classified as high risk. Anyone who wants to develop advanced AI-based learning analytics, for example, therefore needs to check how far it is permissible to go.

The EU AI Act will undoubtedly have consequences for Swedish higher education, and universities will hopefully soon gain greater clarity on what this entails. In the meantime, users should follow existing regulations for handling personal data and, for example, never upload students’ personal or contact information to AI tools.

Copyright

A significant portion of the data used to train language models consists of copyrighted material, including texts, images, and other media. The actors behind the models have generally not sought permission to use this material, even though the models would not have developed into the commercially successful systems they have become without access to it.

In the USA, this has already led to court cases in which, among others, visual artists have sued companies for copyright infringement, while the companies often invoke so-called fair use. In the EU, a stricter approach is being established: the EU AI Act, among other things, requires greater transparency from the various platforms about what material has been used to train their models. The question of possible compensation to copyright holders is still so unsettled that it may be wise to be cautious about uploading protected material to AI tools.

Read More!

Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators (EC 2022)

Regulating AI in education roadmap (COE, 2023)

What you need to know about UNESCO’s new AI competency frameworks for students and teachers (UNESCO 2024)

ENAI Recommendations on the ethical use of Artificial Intelligence in Education

The EU AI Act

Generative AI and Copyright Law

About copyright on AI-generated images

Suggestions for the design of local guidelines for the use of generative AI by students at UU

