
Artificial intelligence: from fiction to reality

Opinion from Anders Jonsson, professor at the Department of Information and Communication Technologies (DTIC) and head of the Artificial Intelligence and Machine Learning Research Group.
13.03.2018


The use of Artificial Intelligence (AI) will grow exponentially in the coming years. Even so, people need an explanation as to why AI algorithms make certain decisions. There are therefore ethical and legal issues to solve: who is responsible if something goes wrong?

Artificial Intelligence (AI) is an area of computer science that studies the automatic generation of intelligent behaviour, where the concept “intelligent” is defined in relation to the difficulty a human being would have performing a certain task. The computational problems historically considered part of AI include reasoning, knowledge discovery, planning, learning, natural language processing, perception, and the ability to move and manipulate objects.

AI has experienced an unprecedented boom in the last decade, especially in the form of deep learning. Many problems that were beyond the reach of computers are now solved via AI algorithms. Some notable examples include the ability to recognize objects in images, translate texts between different languages, recognize and filter audio signals in songs or speeches, and play complex board games like Go.

Although it is not always visible, AI is playing an increasingly important role in society and in people’s everyday lives. Many websites use AI algorithms in the form of recommender systems to customize the contents displayed to users. Many financial transactions are made in the form of algorithmic trading. Robots are usually partially controlled by AI algorithms (in some areas of the car industry, there is more than one robot for every ten employees).

The use of AI will grow exponentially in the coming years. It will be possible to perform many tasks in medicine efficiently with AI. Autonomous vehicles will be common in a few years. Online assistants will help users perform many tasks on the Internet. Smart cities and the Internet of Things propose using networks of sensors to predict events and suggest actions to improve people’s quality of life.

The large-scale implementation of AI systems in society is far from simple. People need an explanation as to why AI algorithms make certain decisions. People could lose their jobs as a result of the incorporation of AI, if a robot can perform the same tasks more efficiently. In addition, some AI systems require the right infrastructure to be in place (for example, the digitization of hospitals). There are therefore ethical and legal issues to solve: who is responsible if something goes wrong?

Due to popular science fiction, the public perception of AI is often only partial, and its current limitations deserve highlighting. Today’s algorithms can solve only a single task in isolation, and the knowledge they obtain is rarely transferred to other tasks. In addition, they do not have the ability to draw conclusions from what they have learned, to perform deliberative reasoning, or to imagine the world beyond the task they are presented with. The development of systems with high-level cognitive skills is a topic of ongoing research (under the name of lifelong learning), but it will probably take a long time before these systems are ready for deployment.

