Mike Sharples: “We have to help students with technology, not against technology”

Mike Sharples is one of the world’s leading experts in technology applications in education and in tools developed with artificial intelligence. On 24 January, Sharples, emeritus professor of Educational Technology in the United Kingdom, delivered the lecture “Plagiarism past, present and future: implications for assessment” during the last session of the “EDevolution Research” series, held in the auditorium of the UPF Poblenou campus.

09.02.2023

Mike Sharples is an emeritus professor of Educational Technology at the Institute of Educational Technology of the Open University of the United Kingdom. He is the author of more than 300 books and papers in the areas of educational technology, science education, human-centred design of personal technologies, artificial intelligence and cognitive science. His research also covers new technologies and learning environments.

He is also the author of a series of reports entitled Innovating Pedagogy, and last year he published the book Story Machines: How Computers Have Become Creative Writers, together with Rafael Pérez, on which his recent article “Automated Essay Writing: An AIED Opinion” is based.

Within the framework of the growing social debate around the implications of artificial intelligence, also in the spheres of education and the university, Sharples participated last Tuesday, 24 January, in the session of the “EDevolution Research” series entitled “The evolution of plagiarism and the role of artificial intelligence: times to rethink assessment”. During the session, held in the auditorium of the Poblenou campus, he delivered the lecture “Plagiarism past, present and future: implications for assessment”. Afterwards, we had a chance to talk to him for a while.

Plagiarism has always existed, but could artificial intelligence (AI) applications such as ChatGPT be a turning point in this regard?

Yes, plagiarism has been a problem for centuries, both academic plagiarism and student plagiarism, but it has only come to the forefront in the last few weeks, due to new AI technology, in particular ChatGPT. Basically, what ChatGPT does is to democratize fraud.

Any student will be able to write an essay in a few minutes: they merely have to give it a title and an introduction, and the rest is generated automatically. This poses a great challenge and is very disruptive to education, but it also represents great opportunities to support learning, because it is a powerful tool in relation to language.

So, should researchers and experts devote more effort to distinguishing between texts produced by humans and those produced by artificial intelligence applications?

One thing we have already established is that it is almost impossible to know whether an essay has been written by a student or by a machine, because these artificial intelligence systems do not copy excerpts of text. What they do is search millions and millions of words, basically on the Internet, and then generate a new text in the style of students or academics. They are very powerful and create highly plausible language that is unique.

While a tool is being developed to distinguish between texts produced by a human and by a machine, new developments in artificial intelligence may simultaneously be making that difference harder to detect. So, we will not be able to know whether a text has been written by a student or by a machine. Like any new technology, it causes problems, but I think the positive aspects must also be analysed.

What do you think might be the positive potentials of artificial intelligence in the field of education?

I think there are many. First, technology can take on some of the lower-level aspects, such as spelling or style of expression, so it is possible to focus more on higher-level aspects, such as the argument, or on ensuring that, technically, everything is correct in terms of the research. A big problem with these systems is that machines can produce highly plausible and very seductive texts that appear to have been written by an academic or a student. But, when you look closer, there are mistakes: it invents references, for example, because it is a language machine, not a database. So there are many possibilities, and one of them is for students to produce drafts using artificial intelligence and then criticize, in their own words, what is not right in those drafts.

If groups of students, using artificial intelligence, write two or three texts assigned by the teacher and are then asked to criticize them, I think that can help students focus more on higher-level aspects and not only on style. Another thing is that these new technologies, like ChatGPT, can be a conversation partner. So, a student can bounce ideas off it and it can be used, for example, to build an argument. It is given an argument from a certain perspective and this leads ChatGPT to provide you with one using another approach and you enter a dialogue with it. In this way, we want students to be better at arguing and better at holding conversations, while using artificial intelligence as a counterpart.

“New technologies, such as ChatGPT, can be a conversation partner (...) It is given an argument from a certain perspective and this leads ChatGPT to provide you with one using another approach and you enter a dialogue with it. In this way, we want students to be better at arguing and better at holding conversations, while using artificial intelligence as a counterpart”

Will this involve changing the role of the faculty?

I believe that the role of the teacher should change, basing assessments less on written texts. We have to decide where students should use their own thinking and exercise their own creativity, and assess that with oral exams, and also where we allow students to use AI, probably for most written texts. We have to help students with technology, not against technology.

But technology is evolving faster than educational systems and will probably, in the short term, cause problems in the educational dynamics in the classroom. What do you advise teachers to do to solve these problems in the short term?

I think it will be very disruptive in the short term; it already has been in the last few weeks. If you had asked students in December whether they had heard of GPT, most of them would not have known about it; if you ask them now, almost all of them know what it is, and barely a month has gone by. Things are moving fast. Students will submit assignments using AI, and I think teachers will immediately have to decide whether they will assess them in the same way or change their criteria, and whether they will still award points for spelling when AI also deals with this. I don't think banning students from using it is the answer. Just as we adapted very quickly to the coronavirus and the pandemic, I think we now need to adapt very quickly to artificial intelligence and develop new means of assessment.

“I don't think banning students from using it is the answer. Just as we adapted very quickly to the coronavirus and the pandemic, I think we now need to adapt very quickly to artificial intelligence and develop new means of assessment”

The Institute of Educational Technology at the Open University, of which you are a member of staff, has addressed in some of its annual reports how education can adapt to the age of artificial intelligence. To facilitate this adaptation, which will also have to be quick, what should be taken into account?

I think we need a new AI literacy, in the same way there has been a need for computer literacy: how to detect fake news, how to use spell checkers... I believe that artificial intelligence literacy should first teach students to use generative AI systems productively, not as a substitute for thought, but to enhance their argumentation and support their learning. Secondly, we need to think further about the impact that AI will have on the world of industry and employment.

In the working world, it will be accepted for you to use these artificial intelligence tools to generate business reports or computer programs. Therefore, students must be trained to use these tools because, outside the university, they will be accepted.

During the seminar at UPF, students asked how all this will affect them: they want to learn, but they are also worried about being left behind. Therefore, we must be able to work with students on a new policy that develops productive ways of working with these technologies.

Might it be a problem to use artificial intelligence for everything?

We have always been dependent on machines. We have always had technologies in education, for example, writing was a technology in ancient Greece. There was a concern that, if you used writing, then the ability to memorize would cease; so we’ve had technology and education for thousands of years.

In our time, the danger has arisen that technology might disempower people and discourage them from learning to calculate, because a pocket calculator does the arithmetic for you, or from studying a language, because there are already machine translators. This is a perennial problem in education: we have to determine how far we trust machines and how we teach students to be self-sufficient and autonomous. I don't think it’s easy.

When education managed to accommodate pocket calculators, we no longer expected students to always do mental arithmetic, but we still teach them how to do long division and multiplication. I think the same thing will happen with language generators. We should teach students how to write properly, and continue to teach them how to express themselves and construct an argument, how to write clearly, how to write in different styles. They still need to learn these basic skills, but we will also have to allow them to use technology.

Currently, are artificial intelligence applications already so advanced that they can write like a human, or do they have limitations?

They have their limits and their main constraint is that they don't have an internal model of how the world works. They are linguistic machines, so they can build very plausible pieces of text, but sometimes they make really stupid mistakes: for example, they invent research studies or produce false academic references, not because they are trying to deceive, but because they are not a database and they are not updated either. They were trained in 2020 and 2021, so they are already a year and a half out of sync and they lack contemporary knowledge. Students will have to understand what limitations artificial intelligence has and cannot accept anything that it produces literally. Therefore, students must be the most critical readers and must learn to read and interpret the results of these artificial intelligence systems.

“Students will have to understand what limitations artificial intelligence has and cannot accept anything that it produces literally. Therefore, students must be the most critical readers and must learn to read and interpret the results of these artificial intelligence systems”

Is there a risk that the texts produced by artificial intelligence might be biased, depending on the type of data they take as a reference on the network to produce them?

Early GPT-2 systems and early versions of GPT-3 were biased because they were trained on the Internet, with texts found on the network, and the Internet is not an encyclopaedia. The company has tried to remedy this, seeking to identify examples of bias and then adapting the system to overcome them, but this is very difficult. You would have to identify all the possible biases: religious, ethical... There are many different types of bias, and the more powerful the system is, the more likely it is to be biased, because it will have read more texts on the Internet, not just a few pages of Wikipedia. It will have read blogs and tweets, and many of them will be biased and contain opinions.

Can this pose risks in the field of journalism?

Yes, there is the danger of biases, including subtle ones. Gender bias is fairly obvious, but there are more subtle ones, for example a Western- or nation-oriented view of the world, or cultural biases incorporated through the languages used... So, if you use it for journalism, you have to do it right. If you, as a journalist, simply trust the technology to do your job and don't check the facts, you won't be doing good journalism.

What part of creativity can be mechanized and what part is essentially human?

For now, these generative AI tools are highly proficient in language, but they don’t have an underlying model of how the world works. They don’t have any data structure that represents this knowledge. So it would be necessary to combine neural network AI with the older symbolic AI, which explicitly represents how the world works; that is, to combine data structures of human knowledge with language machines, but this is a huge task. It needs a step change to integrate symbolic AI and neural network AI, and we are at too early a stage to be able to do that. We don’t even know what will be possible, so it’s a very exciting time.

We have a great many possibilities in the future with artificial intelligence and creativity, not only for writing, but it can also be applied to art, video, discourse...

Although we are only in the initial phase of what artificial intelligence can represent, do you think that policies and laws in the field of education, the university and so on will have to change in the AI era?

Yes. For example, EU copyright laws exclude artificial intelligence machines, since they assume that creative works are produced only by humans, but creative works produced by machines are now being achieved.

There will be a lot of pressure on universities, schools, institutions, the media and so on to adapt, and it’s always dangerous because you can make mistakes. So I hope universities will change effectively, working with students to make this change. It is not only universities that will have to do so, since artificial intelligence tools can have a huge impact on the labour market and some professions will disappear in the future. I believe that, as with every new technology, some professions will disappear and others will be created.

“There will be a lot of pressure on universities, schools, institutions, the media and so on to adapt, and it’s always dangerous because you can make mistakes, so I hope universities will change effectively, working with students to make this change”

If new professions are created, will there also be a need to create new types of studies at universities?

What I expect is that there will be more interdisciplinary studies, because these fields sit at the intersection of neuroscience, data science, machine learning, symbolic artificial intelligence, mathematics, natural language... Therefore, if we really want to move forward, we must have a very radical interdisciplinary approach.

I worked at the University of Sussex in the School of Cognitive and Computing Sciences and, in the same corridor, you had a mathematician, a neuroscientist, a natural language researcher, an expert in artificial intelligence... who talked to one another, and it was really exciting and enormously productive. I believe that the way forward, radically, is to fuse teams from different disciplines and here at UPF you have a great opportunity to do so.

“I believe that the way forward, radically, is to fuse teams from different disciplines and here at UPF you have a great opportunity to do so”

Is there a future for humanistic studies?

Yes, but I think they will be humanistic studies that will include the intelligence of machines, as tools capable of generating creative content. Modern musicians already use technological synthesizers or reverb units as an essential part of their creativity.

I believe there will be humanistic studies in which we will have to look for ways in which technology can be put at the service of humanity and do so with great care, because there are great dangers. As humans, we have always adapted to our environment and we have also created our environment and now we are creating a computer and machine environment and we are adapting to it. So we will have to make humans learn to be creative in another way, with a creativity that will involve the new machines.

“We will have to make humans learn to be creative in another way, with a creativity that will involve the new machines”

Long before the current debate on the risks of artificial intelligence in education, many other digital and multimedia technologies had already been introduced into the classroom. In fact, you have been researching technological applications in education since the 1970s. In general terms, how do you think technology has had a positive impact on education over the last few decades?

I think that, in general, two parallel paths have been followed. On the one hand, there is the path of inventing new machines for teaching, for example the Skinner teaching machine (a mechanical device to control students’ progress through programmed instruction), personalized learning machines, smart tutoring systems... This path continues, with increasingly sophisticated teaching technologies, teaching machines, assessment machines, and tools to support teachers. Then there is the more subversive path, in which students have adopted technology that helps them learn, often in opposition to traditional education systems, whether pocket calculators, mobile phones or automatic translation systems. This second path corresponds to what is happening now. Our students have suddenly become drivers of AI in universities and the universities must react.

But we could also combine these two paths because, thanks to artificial intelligence, students can have more power and, at the same time, more tools for better learning.

Do you think any universities are implementing interesting good practices regarding the integration of artificial intelligence in teaching?

I think there are stages with each new technology and they always seem to be the same: first, ignore it and wait for it to go away; then ban it or at least restrict its use; and, finally in a later stage, accommodate it and try to develop a new pedagogy around it. This is so new that we have gone through these stages in a matter of weeks rather than months or years. 

Over the past year, I think most universities have ignored it, although it was clear that students were already starting to use artificial intelligence technology. Now, some universities have begun to restrict it and, in some, it is being experimented with, but this tends to be on the initiative of individual professors who are experimenting with their students writing research papers or technology blogs; what is clear is that today this is a matter of two or three academics. I don’t think any university has made profound changes to its policies to incorporate AI systems or changed its teaching methods accordingly, because no university policy has yet been drafted on this. That will take some weeks or months, and this has happened so recently... I think it will happen and I think there will be some pioneering universities, but it hasn’t happened yet.

“I don’t think any university has applied any profound changes to its policies to incorporate AI systems or change its teaching methods by incorporating them (...) I think it will happen, and I think there will be some pioneering universities”

News published by:

Communication Office