About ChatGPT and similar tools

The current debate has primarily focused on ChatGPT, but many other generative AI tools are available and very easy to find.

Of course, these tools were not developed first and foremost to enable cheating on homework. More often, they help owners of websites, blogs and the like to find material and generate new texts, making it easier to keep their sites updated. They can also be used to review programming code, for example.

How do these AI tools work?

Since March 2023, ChatGPT has used GPT-4, the latest version of a language model that has developed rapidly over the past few years. It is important to remember that the tool is not a search engine; rather, it generates text based on other texts, according to what matches linguistically. The quality of the instructions you input into the model is crucial for the quality of the results. For example, you can state the scope of the answer, specify the level of style, or clarify the question by stating that certain aspects should be left out or that certain perspectives are to be compared. Answers can also be obtained in different languages, with a fully comprehensible, cohesive text delivered within a few seconds. The results can be impressive. In particular, questions that involve, and can be answered using, strictly regulated, formulaic text, such as programming code or mathematical formulae, have a good chance of receiving a good answer. Since the responses generated are new text, they also currently evade text-matching systems such as Ouriginal.
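The idea of generating text from linguistic probability rather than from stored facts can be illustrated with a deliberately tiny bigram model. This is a sketch only: GPT-4 is vastly larger and based on neural networks, but the underlying principle, predicting a plausible next word from the words that came before, is the same. All names and the example corpus below are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a statistically likely next word.
    Note: nothing here checks whether the output is *true*, only whether
    it matches patterns seen in the training text."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        candidates = followers.get(word)
        if not candidates:
            break
        word = rng.choice(candidates)
        output.append(word)
    return " ".join(output)

# A toy "training set": the model learns only word-to-word transitions.
corpus = ("the model predicts the next word "
          "the model generates the next sentence")
followers = train_bigrams(corpus)
print(generate(followers, "the"))
```

Every word the toy model emits is drawn from patterns in its training text, which is also why such systems can produce fluent output that is nevertheless factually wrong.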

Limitations of the technology

It is therefore not necessarily facts that are generated, but text based on intra-linguistic probability. The process can also produce an answer containing entirely false statements, constructed to fulfil the requirements set out in the instructions. For example, if you pose a question and ask ChatGPT to include references to relevant sources in the answer, the result may be a list in which the names, volumes and issue numbers of the journals are correct, but the specified articles turn out not to exist when the references are checked. In other words, they are entirely invented and will not be found anywhere online, while other references may prove to be correct.

These tools also have other limitations. They are naturally limited to the material in their databases, which means that printed material and digital sources that are not openly available will be missing, thus affecting the quality of the answers, likely more in some areas than others. ChatGPT currently lacks texts produced later than 2021, but this is not the case with other tools.

However, it is in the nature of self-learning systems to become better and better as they gain more and more users, for their databases to grow continuously, and for them to obtain some kind of feedback on the quality of their answers. For example, certain tools may already include valid, if inadequate, references. Developments are moving exceptionally fast, and we can expect them to become increasingly competent.

Ongoing development

The language models are continuing to develop. Currently, they are being integrated into the major search engines – e.g. Google (Google Search) and Microsoft (Bing). This will mean that the answers provided will be based on information from open web pages. This will allow for more specific searches and answers, including references to existing sources, though the reliability of the sources will always need to be checked. Even now we are seeing the beginnings of more powerful AI support in programmes such as Word, PowerPoint, Excel and so forth. Furthermore, there are improved tools for automatic subtitling of videos, and for real-time translation of what is being said during online meetings, and much more...

Is there a higher risk of cheating?

The majority of students do not cheat, and teachers’ focus during assessments is on assessing the students’ learning, not on attempts to detect potential deception. However, we know that attempts at deceptive conduct do occur during assessments. It is impossible to determine how common cheating through use of AI really is, but there have been confirmed cases at Swedish universities, including Uppsala. Awareness of various sites that tempt students to take prohibited shortcuts spreads rapidly.

In some respects, the situation is not a new one. It is not self-evident that easy access to AI tools should in itself lead to more cheating; it may instead replace other forms of cheating. It is likely that ChatGPT, and several similar tools, will soon no longer be free of charge (there is already a paid version), but there are people willing to pay for finished pieces of work. We have previously benefited from the fact that Swedish is a minor language and has not been as interesting for operators offering finished texts, but the development of AI tools has already started to change that situation.

As in the past, it is not always easy to draw the line between a student's insufficiently independent handling of material or a question and deliberate cheating. Herein lies perhaps the biggest challenge with AI-based tools: students need to know how and when the tools can support learning, and when they have to do without them in order to show what they have really learned.

A vital point is that the tools are currently becoming an integral part of the common search engines and programmes that we all use on a daily basis. The risk is thus that the line between what is permitted and not permitted in terms of written assignments will become increasingly blurred, and it is here that the University and individual teachers have a major responsibility to take preventive steps.
