Opportunities and Challenges
Generative AI is already being used by students, teachers, and researchers for various purposes. Here is a brief overview of the most commonly cited general advantages and disadvantages of using generative AI in higher education. Together, they can provide a quick entry into the discussion about the opportunities and risks of generative AI.
Advantages of Generative AI
Efficiency
Generative AI can perform certain types of time-consuming work much faster than humans, which can save a great deal of time. Various tools make it possible to, for example:
- Quickly summarise the content of long and/or many documents in a structured way
- Assess students’ responses to written assignments based on set criteria and formulate feedback on them
- Create formal meeting minutes from meeting notes
- Draft text for standard emails to students
- Find references that support the argumentation in a text
Individualisation of Teaching
Much of students’ teaching now takes place on digital platforms, where the material they produce is collected over time and their results are recorded. These activities can be analysed at an individual level through advanced Learning Analytics. Based on this, generative AI can automatically suggest relevant support resources, interventions, and/or in-depth material. In this way, the university can offer ever better support for more individualised study paths, with both improved quality and throughput as the expected result.
Creativity
With tools like ChatGPT, one can hold a conversation in which ideas are proposed, feedback is given, and objections or clarifications are raised. AI can, for example:
- Act as a sounding board to discuss possible solutions to a problem or to overcome writer’s block and get started with a text
- Provide suggestions for the structure and content of a lecture, as well as create an accompanying PowerPoint presentation
- Create images that illustrate your presentation
- Provide text suggestions for a memo, report, or similar, complete with references
Text Processing
You can also use AI to process your own texts. Upload a text and:
- Ask AI to perform a spelling and grammar check
- Ask AI to simplify an advanced text so that it is better understood by someone with less prior knowledge of the subject
- Ask AI to translate the text into English (or another language)
Necessary Adaptation
For various reasons, the use of AI may be seen as inevitable. One could argue that:
- For a curiosity-driven university, it may seem natural to test AI tools
- It is no more dramatic than using a calculator - controversial at one time, but quickly normalised
- Everyone is doing it - UU’s education should not fall behind
- Various types of generative AI will become, or are already, commonplace in many of the professions students are aiming for
- We must realise that we will never be able to convince students to spend a long time producing texts that they can generate in a few minutes with AI
Disadvantages of Generative AI
Unreliability
A fundamental objection concerns the quality of the answers generated by the various language models, which all suffer from weaknesses to varying degrees, such as:
- Bias - the selection of training data is often skewed, and thus all generated answers risk, for example, disadvantaging certain language areas or reproducing various cultural stereotypes present in the training data. In the worst case, the answers can systematically contribute to reinforcing misconceptions and inaccuracies and confirming one-sided or distorted views.
- Hallucinations - the term for when AI tools simply produce completely fabricated facts and false statements. They are undoubtedly capable of presenting these falsehoods very confidently and convincingly (not with any intent to deceive, but solely in order to provide an answer to the question posed).
- Inconsistency - AI tools are accommodating and provide (within certain limits) answers to exactly what you ask them. If you ask them to come up with an argument that disproves a previous answer, they will oblige with that too. They cannot be trusted!
Generative AI thus places high demands on users’ source-critical competence!
Efficiency - Why and at What Cost?
Without a doubt, time can be saved, but:
- The quality of the answers must always be assessed. But how will one learn a critical approach if one constantly skips the learning process involved in evaluating and selecting sources for an argument - not to mention the intellectual work of formulating one’s own arguments?
- Even if experiments have shown that AI can provide feedback at the level of experienced teachers, how will new teachers ever become experienced assessors? Is it really likely that they will have time to review AI feedback? And how does it affect the students’ learning environment in the long run if teachers hand over more and more to AI?
- Although much is written about saving time, very little is said about what is done with the time saved. In the corporate or administrative world, the advantages of increased productivity are easy to see - but what does it mean in academia? Is the time spent on more research (which can also be made more efficient with AI)? Or on teaching, perhaps enabling more meetings between teachers and students? Or are teachers laid off? And who decides what to do? The answers to these questions are more interesting than the mere statement that time can be saved.
How Creative Is It?
In many areas, important parts of the intellectual processing of a question occur during the actual work of producing a well-formulated and well-structured text. The use of generative AI can therefore mean that students miss an important opportunity to practise.
Difficulties in Assessing Students’ Knowledge
A finished text of good quality has previously been, above all, a sign that students have undergone a learning process involving independent intellectual processing of material based on a problem statement. It is, in fact, the carrying out of that process that is being examined. From a teacher’s perspective, texts and arguments increasingly generated by AI risk no longer providing a basis for fair assessment.
Refraining from Help
When it comes to simple language checks, refraining from AI assistance may rarely be necessary (except for certain tasks, e.g., in language subjects) - but students need to take the time to understand whether, and why, linguistic or structural suggestions from AI really are better and even necessary, or whether they would rather ignore them. Will they have the time and energy to do that? Properly used, AI can be a powerful tool for mastering and developing one’s own language, but there is a risk that many students routinely accept all AI suggestions and in the process become linguistically poorer.
Fear of “Missing the Train”
The fear of missing the train has always been a poor argument for adopting technology in educational contexts, as opposed to having sound pedagogical reasons or acting out of curiosity-driven interest.
Leaving calculations to a calculator is not the same as leaving the formulation of texts to an AI tool. Linguistic expression is more fundamental to how we interpret, understand, and relate to the world, and text types vary enormously in both kind and significance. Naturally, who formulated a text does not always matter equally for every type of text, but the simple, sweeping analogy with the calculator is nevertheless misleading.
That said, all teachers undoubtedly need to know about AI tools and how they work, because students are already using them, and this may need to have important consequences for how teaching and assessment are designed. And of course, education should prepare students for a professional life in which AI plays an important role. But only on the basis of such an analysis can a teaching staff agree on what should apply to their courses and how it should be communicated to students. Anxiously starting to use generative AI primarily to avoid appearing outdated is not a good idea.
Ethical and Legal Issues
Finally, there are a few more problem areas that one may need to consider as a prospective user of generative AI. It is worth noting that these aspects are often raised by students as worrying factors.
- What happens to academic integrity when students (but also teachers and researchers) can let AI tools search for sources, summarise them, suggest a disposition, and generate the article text? What still constitutes independent intellectual work? Where do we draw the line against cheating and fraudulent behaviour? And what transparency regarding AI use is needed to determine that?
- There is also criticism, not least from the global south, of a colonial approach to data. The large language models that exist have primarily been developed by private, commercial companies in North America and Western Europe. However, they have been trained on data collected from around the world, often without any specific permission being obtained. The AI services are then sold and generate profit for the companies. Not everyone finds this reasonable.
- An adjacent fairness aspect concerns how AI tools are “raised.” All major providers equip their tools with various safeguards to prevent them from generating content that is illegal or otherwise inappropriate. The systems refuse, for example, to answer questions about bomb-making, to generate content that could constitute incitement to hatred, or to produce images showing deeply offensive material. The necessary, demanding, and time-consuming work of reviewing the material these safeguards must filter out is largely carried out by low-paid workers in the global south, while the users who benefit from it remain largely in the USA and Western Europe.
- A fourth factor concerns sustainability aspects: generative AI consumes large amounts of energy compared to traditional web services. As language models grow and the number of users increases, more and more electricity is required to generate texts, images, videos, etc.
- A final factor concerns intellectual property rights. In the USA, legal processes are already underway where copyright holders (visual artists, authors, etc.) seek retroactive compensation for their works being uploaded as training data to various language models. It is also a growing issue elsewhere and the subject of legislation at the EU level. The legal situation is still unclear - but there may be reason to refrain from uploading copyrighted material to AI tools without permission.