Artificial intelligence sparks widespread debate
A few weeks ago, Microsoft co-founder Bill Gates spoke about the revolution generative artificial intelligence tools will bring to the world, praising the role they will play in helping humanity make progress in every field.
Gates concluded his long speech on the new technology by saying, “As we create robots powered by generative artificial intelligence, we just need to make sure they don’t get Alzheimer’s disease.”
Dementia, Hallucinations, and Alzheimer’s
While many assumed Gates’ use of the term “Alzheimer’s” was a joke, it turned out the Microsoft co-founder was serious: programs and robots built on generative artificial intelligence can exhibit something akin to Alzheimer’s, dementia, and hallucinations. This is not mere speculation — it has already happened with the ChatGPT application. OpenAI announced last week that it will address the hallucination problem by adopting an updated method for training its language models.
Dementia or AI hallucinations occur when bots such as ChatGPT, Bard, or similar programs fabricate events that never happened and insist they are indisputable facts. These hallucinations become a serious dilemma, especially when humans take them at face value.
Christophe Zogby, founder and CEO of Zaka Technologies, told Sky News in an interview that the story began when a lawyer in the US used ChatGPT to help build a case against an airline, seeking compensation for a passenger injured by a food cart on board a plane. The lawyer asked ChatGPT to find precedents in which airlines had been held liable for similar incidents. Sure enough, the program supplied him with six related past cases — only for him to learn later, from the presiding judge, that none of those precedents actually existed.
Zogby added that this prompted the US attorney to admit he had used ChatGPT to complete his legal research, noting that the program had assured him the information was correct and suitable for his claim while refusing to provide sources or legal references for it. What happened demonstrated that ChatGPT and all similar programs can, in effect, suffer from dementia.
According to Zogby, ChatGPT is based on a language model trained on vast amounts of text to produce coherent prose. The application’s purpose is not to retrieve correct, accurate information but to write convincing text: during training it does not care whether a given piece of information is true, only that it fits plausibly into the sentence. It can therefore invent names and events, as long as they are consistent with the surrounding text, and then insist they are real.
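The mechanism Zogby describes — choosing words because they are statistically plausible, not because they are true — can be illustrated with a toy sketch. The following is a deliberately simplified bigram model (not OpenAI’s actual training method, and the corpus is invented for illustration): it counts which word tends to follow which, then greedily generates the most likely continuation. The result reads fluently yet states something the corpus never said.

```python
from collections import defaultdict, Counter

# Toy corpus: the model only ever sees word co-occurrence, never "truth".
corpus = (
    "the court ruled against the airline . "
    "the court ruled against the defendant . "
    "the airline paid the passenger . "
).split()

# Count bigrams: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(word, length=5):
    """Greedily append the most frequent next word -- plausibility, not truth."""
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # fluent, but a claim the corpus never made
```

Starting from “the”, the model produces “the court ruled against the court” — grammatically smooth, yet a statement that appears nowhere in its training text. Scaled up by many orders of magnitude, this is the same failure mode as a fabricated legal precedent.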
Zogby thinks the situation is dangerous because people use generative AI software without verifying the accuracy of the information it gives them, and so end up with text containing many errors. His advice is not to rely on these programs when searching for specific information, but to turn instead to the familiar Google search engine, which returns answers accompanied by links whose validity the user can verify.
Will this issue be resolved in the end?
Zogby acknowledged that the problem of ChatGPT and similar programs suffering from “dementia” or hallucinations may never be 100% solved. Even if the technology is paired with the ability to verify stories and events, the best outcome may be a stage of reduced hallucinations — and that will not happen in the near future.
For her part, Karen Abbas, a digital marketing expert, told Sky News Arabia that the terms “Alzheimer’s”, “dementia”, and “hallucinations” are preferred over saying that ChatGPT or other AI programs are “lying” because the program does not realize it is wrong: in moments of uncertainty it analyzes the data it has and settles on an answer it believes to be true, much as a person suffering from dementia, Alzheimer’s disease, or hallucinations believes that what he says is 100% true. An AI robot should therefore not be viewed as a rational human being who holds the answers to every question we might ask.
Abbas also asserted that as time passes and the technology develops, the rate of such “dementia” in artificial intelligence programs will decline, but it will be difficult to eliminate completely. It therefore remains essential for users to verify the accuracy of any information they obtain through these programs.