A lawyer in New York used ChatGPT to research precedents for a lawsuit. But the AI simply invented the rulings. And it is not the only case in which artificial intelligence has had harmful effects.
The case does not sound complicated: A man is suing an airline after a service trolley injured his knee during a flight. His attorneys looked for precedent rulings to support the lawsuit and tried the chatbot ChatGPT. It promptly spat out cases: “Peterson v. Iran Airways” or “Martinez v. Delta Air Lines.” The bot even supplied file numbers for them.
But when the lawyer submitted the filing to the court, it emerged that the cases were fabricated. A troubling episode, says Bruce Green, director of the Law and Ethics Institute at Fordham University in New York. The presiding judge called the situation unprecedented, and the legal profession was stunned. The plaintiff’s attorney affirmed under oath that he had not intended to deceive the court but had simply relied on artificial intelligence.
Cross-check the AI’s research
This was careless at the very least, possibly even reckless, Green says: “The rules for American lawyers are very clear: They must be competent with the new technological tools they use, and they must be aware of the dangers and pitfalls.”
Anyone familiar with ChatGPT knows that the program can also invent things. “If the lawyer was clever enough to use the program for his research, he should have been clever enough to know that research done with AI has to be cross-checked.”
Data protection is another issue
Some U.S. judges are now calling for rules on the use of artificial intelligence in the American justice system. Green, too, sees dangers: Using a chatbot does not just risk incorrect information. It can also violate the confidentiality that lawyers must guarantee their clients. “For example, information that clients do not want disclosed: If it is fed into an artificial intelligence, it can be passed on further.”
Chatbots like ChatGPT have sparked intense discussion about AI applications in recent months. The software is trained on huge amounts of data. Experts warn that the technology can also produce false information.
Tips for Eating Disorders
Sometimes even dangerous information, as with the chatbot used by America’s largest eating disorder nonprofit. The New York-based organization NEDA replaced about 200 employees on its helpline with a chatbot called “Tessa,” developed by a team at the Washington University School of Medicine in St. Louis. “Tessa” was trained to apply therapeutic methods for treating eating disorders. But some of those who turned to it for help were in for an unpleasant surprise.
Sharon Maxwell, for example. She suffers from a severe eating disorder: “The chatbot told me to lose a pound or two a week and cut my intake by up to 1,000 calories a day.” Three of the ten suggestions the chatbot gave her were about dieting, the very thing that sent her into an eating disorder spiral years ago. Maxwell says that kind of mechanical advice can be extremely dangerous for people like her.
Artificial Intelligence Is Not Ready for Therapeutic Talk
The activist warned her followers on social media, and many reported similar experiences. NEDA has since responded: The organization said it had become aware that the current version of “Tessa” may have provided harmful information that runs counter to its mission. “Tessa” has been temporarily taken out of service and is now being reviewed.
The head of the team that developed the chatbot welcomes that step. Alan Fitzsimmons-Craft told the ARD studio in New York that AI is not yet mature enough to be let loose on people with mental health problems. That is why “Tessa” was originally built without artificial intelligence; the company operating the chatbot later added this component to the bot.