Artificial intelligence, including generative AI systems such as ChatGPT, has grown rapidly and opened up immeasurable opportunities for businesses, better governance, healthcare, and the energy industry. It is, however, also a disruptive technology that presents many risks, ranging from privacy violations and misinformation to potential threats to human security.
“AI hallucinations are the top priority to solve because they misinform a lot of people. When AI ‘hallucinates’, it generates untrue results not backed by real-world data, and the whole AI research community is trying to find a solution,” Dos Bakytzhan, founder of GoatChat AI, told NE Global in an exclusive interview during the Astana International Forum on June 9.
When generative AI lies
An AI hallucination occurs when a large language model, or LLM – the type of AI algorithm that uses deep learning techniques and very large data sets to understand, summarize, generate, and predict new content – produces false content.
LLMs are AI models that power chatbots, such as California-based OpenAI’s ChatGPT.
What happens to your data?
Another issue is privacy. “Right now, no country in the world exists which understands how to regulate that stuff because it’s new, it’s fresh, and we have to make sure that we don’t do it the wrong way, like in Italy, where they banned AI apps, later regretted the decision, and reversed it shortly thereafter,” said Bakytzhan, who is listed on Forbes’ 30 Under 30 list.
Italy banned the use of ChatGPT in March after the application suffered a data breach, but the ban was lifted about a month later.
“Lots of people are chatting with AI, and they are trying to get advice based on the information they are feeding it,” added Bakytzhan, before mentioning that people are also concerned about how the data they feed artificial intelligence is going to be used.
Bakytzhan believes that as AI continues to develop, the regulation that will inevitably follow needs to be dynamic and adaptive as the large language models advance.
“In the next 6 to 12 months, the large language model is going to be in our life. You will use it everywhere, and maybe each government should have its own policy and will need to work with the public in their [respective] countries [to formulate regulatory policies] … Maybe something similar to the GDPR (the EU’s General Data Protection Regulation, which came into effect in 2018), but with a softer footprint, because AI will go faster and become more advanced as the users train it,” he said.
Bakytzhan proposed that AI follow the same rules as the databases that already exist in software. Combining AI with knowledge graphs, which are a logical way to capture data relationships and convey their meaning, can improve the accuracy of outcomes and increase the potential of machine learning. Knowledge graphs drive intelligence into the data itself and offer AI the tools to make sense of it all. This gives AI the context it needs to be more explainable, accurate, and repeatable.
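The idea of a knowledge graph can be made concrete with a minimal sketch: facts are stored as subject–predicate–object triples that a system can query explicitly, rather than being buried in model weights. This illustrative Python example is an assumption for demonstration only, not a description of GoatChat’s actual architecture:

```python
# Minimal illustrative knowledge graph: facts stored as
# (subject, predicate, object) triples, giving an AI system
# explicit, checkable context instead of opaque statistics.
class KnowledgeGraph:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        # None acts as a wildcard, so partial patterns can be matched.
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = KnowledgeGraph()
kg.add("GDPR", "enacted_by", "European Union")
kg.add("GDPR", "in_effect_since", "2018")

# Retrieve everything known about the GDPR.
print(kg.query(subject="GDPR"))
```

Because every answer traces back to a stored triple, results are explainable and repeatable in the sense Bakytzhan describes: the system can show exactly which facts support its output.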
“You have to find new ways to regulate that knowledge graph and identify what could be stored and what could be the next privacy model for the knowledge graph,” he said, noting that AI has a lot of vulnerabilities, including that it can easily be hacked. “Of course, it would have to work over GDPR compliance, because the Internet works over GDPR compliance.”
The need for regulation
Bakytzhan said differential privacy (DP), a mathematical model, can be broadly applied to AI and help solve privacy problems. He explained that differential privacy allows neural networks to be trained through an algorithm that disregards personal information and prevents users from being identified by their chatting behavior.
“At the same time, it’s weakening the large language model. In that sense, it’s a tradeoff,” the founder of GoatChat AI said.
The advancement of AI has prompted many governments to pay closer attention to the technology and consider regulations. On June 14, the European Parliament adopted its negotiating position on the Artificial Intelligence Act, a move that is intended to ensure that AI is developed and used in Europe fully in line with the EU’s views on human oversight, privacy, and transparency.
With that in mind, Bakytzhan believes there is no limit to how far AI can go. “The first thing is to generate stuff. The second is engine-based systems where artificial intelligence and the large language model could do, simultaneously, millions of tasks. These engines can communicate with each other and send data back and forth.”
Is the Skynet scenario a real threat to humanity?
Turning to the ‘Skynet scenario’ – the fictional artificial neural network-based conscious group mind and artificial general super intelligence system that serves as the antagonistic force of the Terminator franchise – Bakytzhan said AI competitive behavior could allow this to happen.
“Regarding Skynet, it was military software and was launched as a game for the users. There is certainly a possibility that would happen. That’s why, starting from 2023, we must pay attention. This is much more of an engineering job that has to be done. All the laboratories that exist, all AI research teams, should bring their knowledge together to solve this big question. It’s very stupid to predict the future, but what we know for sure is that once you apply AI to cyber sports games … you see a spike of aggressiveness; a spike of over-rationality over sensitivity. That could be a question to pay attention to. For instance, in the case of Dota 2, they are training a model that plays by itself. In just three days it outperforms all the top pro-level players cumulatively.”
The AI would act very aggressively because its objective function is to acquire points and win the game, no matter the cost.
“It would go and die,” he said. “And that worked, but that was very cruel and aggressive. The computer outcompeted everyone. A regular player playing the same game wouldn’t sacrifice himself in those cases. Like in chess, if you tell people, ‘You play like a computer’ it’s like making a ridiculous move, maybe sacrificing the Queen. But at the end, the computer is going to win,” Bakytzhan said.
“Shall we play a game?”
The power of AI, including the same scenario mentioned by Bakytzhan, has long been a part of a societal debate, including in pop culture. A plot line in which a computer pursues ‘completing the mission’ or ‘winning the game’ at all costs, utterly disregarding the human factor, played a central role in two film classics – Stanley Kubrick’s 1968 masterpiece, 2001: A Space Odyssey, and 1983’s WarGames.
In the former, a spacecraft’s computer, HAL, overrides the decisions made by the human crew in order to fulfill the astronauts’ mission, killing them in the process. In WarGames, a computer system known as Joshua, which was designed to prevent a nuclear war, opts to launch its own preventive strike when its human programmers attempt to order it to stand down.
The next step is solutions for Emotional AI, a term for technologies that use affective computing and artificial intelligence techniques to sense, learn about, and interact with human emotional life.
“There are many labs around the world, including MIT, trying to teach AI to be more emotionally sensitive,” he said. “That’s a tough job. By the end of the summer, we should come up with new ideas and new solutions on how to solve different cases. Right now, there is no silver bullet.”