Guidelines on generative AI with comments

The guideline for the use of generative AI in teaching and assessment came into effect on 20 January 2025 and is included in the university's policies and regulations collection.

The guideline is designed in line with the fundamental principles for responsible AI development set out on the Vice-Chancellor's blog on 6 September 2024:

  • encourage responsible, ethical use and testing of generative AI tools,
  • support our staff in becoming AI literate,
  • assist students in using generative AI effectively and appropriately in their learning,
  • promote AI adaptation in teaching and examination,
  • ensure that information security, data protection, copyright, and academic integrity are maintained,
  • collaborate on best practices as the technology and its applications evolve.

The development of the guideline involved many complex considerations. This page describes the considerations behind each guideline.

1. If the use of generative AI is restricted in teaching situations and/or assessments, this must be clearly indicated in written information. The restriction must be justifiable with respect to the intended learning outcomes, the nature of the task or educational considerations. In teaching situations and assessments, it must also be clearly indicated in written information if the use of generative AI must be reported.

The development of generative AI affects all teachers and students at the university in one way or another. The technology can affect everything from how education is conducted and examined to what needs to be taught. It is important and necessary that teachers and students explore how generative AI can, and should, be used on higher education's own terms and for its own purposes.

There are also often good reasons to limit the use of generative AI in education, both in relation to learning and assessment objectives and for pedagogical reasons. What is appropriate can differ greatly both within and between courses.

Course coordinators/examiners plan and carry out the teaching and assessment, and it is primarily to them that students direct their questions. They must therefore be able to justify any restrictions that apply to the course in question.

For courses at PhD level, the supervisor and department are responsible for providing clear written information about the extent to which the use of generative AI is restricted or must be reported by the doctoral student.

Information about restrictions on the use of generative AI must be available in written form, for example at the start of the course via the course's homepage in Studium. In connection with assessments, established wording such as "Permitted aids are..." may be used.

Within the framework of a program or a subject's combined course offerings, it is wise to work out a common approach to the role generative AI should play in teaching, thereby giving individual teachers support when they in turn need to formulate instructions for a specific course. This may mean that course and program plans need to be revised to add or clarify learning objectives.

A particular question concerns when and how the use of generative AI must be reported. A general requirement to always state when and for what purpose generative AI has been used quickly becomes unreasonable, since even simple spell-checking programs can contain generative AI components. Transparency requirements must therefore be designed for the contexts where they are relevant.

Today, many academic journals and organizations have far-reaching requirements for reporting on the use of AI. Especially in postgraduate education, it may be wise to adapt local rules to the scientific practice that develops in one's own subject.

In cases where unauthorized use of generative AI has occurred, the matter may become a disciplinary case (see the procedures for suspected deception in examinations). If a student in an exam has not complied with the restrictions on the use of generative AI, it can be considered cheating, but it can be very difficult to prove. A teacher who suspects unauthorized use of generative AI in an examination may need to resort to alternative assessments or a supplementary examination in order to determine whether the student has achieved the learning objectives. Great uncertainty about whether a student used generative AI in an examination can also form a basis for not approving an assignment.

2. Students planning to use generative AI in connection with teaching situations or assessments are responsible for keeping themselves informed of both central and, where applicable, local guidelines.

Students have a responsibility to find out what applies to the education/course they are currently following. A student cannot assume that what applied to a previous course also applies to the current one. If the written instructions are perceived as unclear, it is the student's responsibility to ask for clarification.

3. If students are expected to use generative AI in teaching situations or assessments, course coordinators/examiners must be able to ensure that tools are made available at no cost to the student. If doctoral students are expected to use generative AI, the principal supervisor is responsible for ensuring that suitable tools are available at no cost to the doctoral student.

This requirement is based on the already existing guidelines for students' working conditions, point 5.5: students may not be required to pay for course literature, compendiums, computer licenses or other study materials. The university library must ensure that course literature is available at least as reference copies. Departments must similarly ensure that other material, e.g. compendiums, is available as reference copies. The same applies to necessary computer licenses, which must be installed on the campus computers.

Doctoral students are not covered by the guidelines for students' working conditions, but should still have free access to the AI tools required to participate in teaching and examinations.

4. A person who has made use of AI-generated material is responsible for how it is used. Usage must be guided by academic probity, a critical attitude to the reliability of the material and ethical considerations.

Whoever uses generative AI, i.e. a system that creates new material based on instructions from the user, bears the ultimate responsibility for critically reviewing and assessing the quality and suitability of the generated material. Anyone who uses generative AI needs to develop a basic AI literacy in order to manage it wisely and responsibly. This applies to teachers who choose to incorporate AI tools in their teaching as well as to students and doctoral students who choose to use the technology themselves. AI literacy includes basic knowledge of how the systems work, their degree of reliability, their transparency regarding which sources are used when answers are generated, and legal and ethical aspects, and not least an independent, critical approach to the material that is generated.

The AI tools are surrounded by many ethical and legal issues. They can produce skewed or outright fabricated answers (bias and so-called hallucinations), which can contribute to the spread of misleading or completely incorrect statements. Generative AI can also produce answers that, if not critically considered, risk being discriminatory and/or reproducing prejudices and generalizations about certain groups.

The use of data and copyrighted material to train the models poses risks to academic integrity, and the handling of personal data and research data is another potentially problematic aspect to consider.

Likewise, sustainability perspectives merit critical discussion: the potential widening of economic gaps and power imbalances between different parts of the world, the working conditions of those employed to train and "educate" the models, and the considerable energy consumption of the tools.

Students, doctoral students and teachers can acquire a critical approach by using the tools in a structured, thoughtful way. A good starting point is to treat AI-generated material as raw material, which needs to be evaluated, sometimes discarded, and usually processed in order to contribute to one's own independent reasoning and solutions.

The university's central support units offer training and other resources aimed at both departments and individual teachers, and thus contribute to the development of AI literacy. Although most university degrees and courses do not have AI literacy as a learning objective, teaching is the primary place where students can be expected to develop it. The support offered by the University Library and the Language Workshop also plays a central role. At present, however, the development of AI literacy risks falling through the cracks because it is not included in general educational goals or in course or program plans.

5. Sensitive personal data may only be transferred to generative AI systems approved by the University.

The same caution that applies to other systems governed by existing data protection policies needs to be observed when using systems for generative AI.

6. Material produced by students may not be transferred without the students’ consent to generative AI systems that might use the material to train the systems.

Generative AI needs access to enormous amounts of material, and the material that users upload to the tools is therefore often used to train the models. Much of this material is or may be copyrighted. Whether this use is permitted, or in fact illegal, is currently a contested issue, which has also led to ongoing legal proceedings. Within the EU, the so-called EU AI Act entered into force in August 2024; it contains provisions that, above all, impose greater transparency requirements on AI companies about what material is used to train the models.

Other provisions of the EU AI Act appear to potentially affect those using the systems, i.e. ultimately teachers and students at UU. The exact interpretation of what this means is still not entirely clear, and a reformulation of the guideline may therefore become necessary.

In the consultation process, the student unions stressed that it is important for students that material they produce within the framework of their education is not disseminated to generative AI systems, since the material may be or become subject to copyright protection. Pending the emergence of a clearer practice around the use of AI tools in teaching and examination in higher education, this limitation was judged to be a reasonable request from the student unions.
