Mercedes brings ChatGPT to its cars and hopes to outsmart AI errors

Mercedes S-Class

Mercedes is working with Microsoft and is starting an initial three-month testing phase.

(Photo: Daimler AG)

New York, San Francisco. Mercedes-Benz wants to improve the voice control of its vehicles and is testing ChatGPT, the chatbot from Microsoft partner OpenAI. The testing phase begins this Friday, the automaker announced on Thursday.

Mercedes customers in the US whose vehicles have the MBUX infotainment system can take part: more than 900,000 vehicles in total. Registration is to be possible through the app.

Drivers who activate voice control with the “Hey Mercedes” command should be able to communicate with the car “more intuitively” thanks to artificial intelligence (AI). The voice assistant should be able to share interesting facts about a destination, suggest a new recipe, or answer general knowledge questions. What makes ChatGPT special is not only that it understands complex input, but also that it can hold longer conversations and handle follow-up questions.

“The integration of ChatGPT is a real milestone for us in making the car the center of our digital life,” said Mercedes CTO Markus Schäfer. The pilot project adds ChatGPT’s functionality to existing features such as “navigation input, weather queries, and more,” enabling “conversations with natural dialogue and follow-up questions.”

Eric Boyd, head of AI in Microsoft’s cloud division, said the system could later be equipped with additional functions, such as reserving a table at a restaurant or buying movie tickets. OpenAI’s most powerful language models, GPT-4 and GPT-3.5, are already running in the background.

The testing phase of the AI voice control is initially limited to three months. Based on the results of this beta test, Mercedes will examine whether to offer large language models for “conversational communication” in its vehicles in the future, a spokesman told Handelsblatt when asked. “We started with ChatGPT because it is currently the market-leading model.”

Fight against hallucinations

An issue relevant not only to ChatGPT but to all providers of large language models is so-called hallucinations: errors and misrepresentations by the AI. This is especially problematic in a car, where the driver must stay focused on traffic.

Mercedes is testing a special way to suppress hallucinations: its own system cross-checks the AI’s information. “To rule out hallucinations, we rely on plausibility checks on ChatGPT output,” the spokesperson said. The Mercedes Intelligence Cloud is to check so-called “point-of-interest” recommendations, that is, references to restaurants, gas stations, or other destinations, to see whether those places actually exist.

According to people familiar with the company, Mercedes uses its own verification data, such as search results from Google. The automaker stresses that it “always retains sovereignty over the IT processes running in the background.”
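The plausibility check described above can be pictured as a simple filter: suggestions from the language model are only surfaced if they can be matched against independently verified place data. The following is a minimal sketch of that idea; the function names are invented for illustration, and the static list stands in for real verification data such as Google search results — Mercedes has not published its actual implementation.

```python
def normalize(name: str) -> str:
    """Case- and whitespace-insensitive key for matching place names."""
    return " ".join(name.lower().split())

def filter_poi_suggestions(llm_suggestions, verified_places):
    """Keep only those model suggestions that appear in the verified set.

    llm_suggestions: place names proposed by the language model
    verified_places: independently confirmed places (stand-in for real
                     verification data, e.g. search results)
    """
    known = {normalize(p) for p in verified_places}
    return [s for s in llm_suggestions if normalize(s) in known]

# Example: the model hallucinated "Chez Phantom", which is filtered out.
verified = ["Blue Diner", "Central Gas Station", "Harbor Grill"]
suggested = ["Blue Diner", "Chez Phantom", "harbor  grill"]
print(filter_poi_suggestions(suggested, verified))
# → ['Blue Diner', 'harbor  grill']
```

A real system would query a live places database instead of a static list, but the principle is the same: the model proposes, an independent data source disposes.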

False statements in court

Errors and false statements are the biggest obstacle to the large-scale deployment of large language models such as GPT-4. While the quality of models from companies like OpenAI keeps improving, they regularly produce fictitious statements or invent entire sources.

Still, many companies are already using the software, sometimes with serious consequences. New York attorney Steven A. Schwartz attracted attention in a court case because he not only let ChatGPT draft his filings but also overlooked the serious errors it made in the process.


Schwartz had filed a lawsuit against the airline Avianca on behalf of a client who says a serving cart injured his knee during a flight. In his court filings, Schwartz cited other cases that were supposed to support his client’s claim for compensation.

The judge examined the documents but could not find the cases cited. When asked, Schwartz admitted that ChatGPT had supplied the cases to him and that he had not verified them. “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge P. Kevin Castel wrote.

Tech companies such as Microsoft and Google are working with hundreds of experts to make their systems less error-prone. Microsoft CEO Satya Nadella, however, has conceded: “The results are not yet 100 percent reliable.” Even the carefully selected examples in demos of Google’s and Microsoft’s AI systems contained incorrect figures or false statements.

More: California approves Mercedes’ “Drive Pilot” automated driving system.