The threats and opportunities posed by artificial intelligence
Artificial intelligence, or AI, has taken the world by storm over the last few years. While AI is not a new concept, both in fiction and in reality, it has become part of the global zeitgeist, in no small part due to its rampant presence on social media and its growing relevance in the job market.

Alexiei Dingli is a professor of artificial intelligence at the University of Malta's Faculty of Information and Communication Technology. TVM recently aired its first show hosted by an AI presenter, in a series called Artificial Intelligence in our lives, with an avatar modelled after Dingli.

The Malta Independent on Sunday spoke to Dingli and asked him what the aim of the programme is, as well as whether it is a proof of concept for something else or simply an educational effort.

Dingli replied that the aim of the project was to teach people about AI. He said that while he has done a lot of research and writing about AI, the majority of his work has been in English, and he received a lot of feedback asking whether there could be some content in Maltese. Finding the right words or terminology in Maltese for technical subjects can be challenging, he said, but the programme ultimately began as a way to teach and inform people about AI, "mainly what it is and what it is not".

"Many people don't know what AI really is or means," Dingli said. There are many preconceptions originating from Hollywood, with its plethora of fictional examples of robots trying to take over the world. "That is not going to happen, in the immediate future at least," the AI professor commented.

He said the programme was essentially intended to expose people to the different facets of AI, covering topics such as health, transportation, cybersecurity, education, industry, labour and personal relationships. He added that the programme is intended to be as balanced as possible, highlighting both the good and the bad of AI.

Dingli said that while working on the show, the team began thinking about having an AI avatar present the programme, which is how the avatar modelled after him eventually came about.

He spoke of how his mother-in-law had called his wife after the programme first aired, having thought that the AI avatar presenting the show was actually Dingli himself.

"Many people do not realise what you can do with AI now, so we used the occasion to show people that not everything you might see in this day and age is real, and you need to validate and verify everything, so we're dealing with that aspect as well," he said.

With AI becoming so prevalent, concerns have been raised about mixing AI with the dissemination of news and information, especially considering the ongoing struggle against fake news.

"I don't have concerns; in fact, I think that AI can help news," he said.

He thinks it is important that media houses remain and are strengthened further, as what concerns him is the "wave of misinformation, where nowadays every Tom, Dick, and Harry can go online, write an article about whatever, add a photo or video about whatever, and publish".

He remarked that "unfortunately, many people do not question, they just believe and share".
He continued that an advantage of news houses is that their name is behind what they publish, "so I can be sure that if I read an article from a reputable news house, then that article has a level of quality, and it was verified or validated... It gives me that peace of mind".

He said he is not concerned because he believes AI can be an opportunity for news to reach more people. He gave the example of an English-language newspaper using AI to also make a Maltese version available. He also used museums as an example, saying that children may find museums boring because the content is not customised to interest them, "but I think that AI can be a medium to repurpose that content in such a way to reach even more people, and that is why I see AI as an opportunity more than a threat".

AI has developed significantly over recent years, and one aspect of that is AI video generation. An example that clearly illustrates its rapid progression is the Will Smith spaghetti test. In 2023, a poorly generated video of Will Smith eating spaghetti went viral, prompting many to remark that AI had a long way to go before it could have a serious impact on creative industries. Only two years later, generated videos of Will Smith eating spaghetti are still being used as an informal benchmark to assess AI video generation, and the results have improved drastically in that relatively short time.

Dingli was asked whether it is feasible for AI-generated videos to become essentially indistinguishable from reality in the near future, and whether he believes that poses any risks.

He replied that yes, it is feasible, and that he believes it poses a lot of risks, adding that it is among his biggest concerns. "What if it's a court case against you, and I get a video of you doing something?" He believes the system needs to be rethought, and spoke about the importance of having systems to detect and flag AI videos.

The academic said that no detection system is infallible, and that it can and will fail, which is why people need to think critically in addition to using AI tools that can help with flagging.

"One fine day, I don't know when, it will become indistinguishable, and the question is what happens then. That's why I'm saying not to just rely on the AI tools. The tools are there to support you, but more importantly, it is the individual who needs to check the content that they are consuming."

Asked which fields he anticipates will be particularly reshaped or affected by the use of AI, Dingli said he believes all fields will be affected, but that research is showing that those mostly being affected are employees at the lower levels. He continued that he is not referring to industry, but to junior positions in general, with tasks such as researching, summarising and translating. He gave junior lawyers as an example.

Dingli said this leads to another problem: if companies are not employing people in those positions, there will be a lack of experienced people to take up senior positions down the line. "I think it would be very short-sighted for a company to say that it will not hire juniors because it has AI. I think the approach should be that if you used to bring in 100 people, now you bring in 50, and use the seniors to give them the additional knowledge that AI will not have."

He continued that industry is a totally different matter, remarking that there are two issues in that regard, which he calls "the AI paradox".

"You have those people who are low-skilled low-paid where they will end... According to the World Economic Forum, automation this year in industry will reach 50%. So for the first time since the Industrial Revolution, there will be more robots than people working in industry. Where the trends are going is very obvious, and I would guess that in five, maximum 10 years, there will be full automation," he said.

Dingli continued that, on the other hand, having a factory with this automation means that one would also need a number of engineers and data scientists, and he remarked that not enough of these professionals are being produced. "So the problem is that you have those who are low-skilled low-paid who will lose their jobs, and you also have the high-skilled high-paid who we are lacking, and by 2030 this will be a massive problem worldwide."

Another problem, he said, is that one cannot just take people and move them, as there is a need for upskilling, reskilling and retraining, "and if we have to be honest, not everyone will make the transition, so as a society, we need to take care of the people who will not manage the transition as well".

Dingli said that AI is a very important tool and, referring to a quote by Stephen Hawking, said that AI is "probably the most powerful tool ever invented by humans, but it depends on how we are going to use it". He likened AI to atomic energy, which he noted can also be used to create atomic bombs. "So let's be very careful, because it can be very dangerous as well."