With OpenAI's new AI, your cell phone should be able to talk more and more like a human. (Stock image)
© Getty Images/iStockphoto/mikkelwilliam/istockphoto.com
Last week, OpenAI published a report describing in detail the bugs and security issues in its new AI model, GPT-4o. The post revealed interesting details about the artificial intelligence that has been integrated into the popular chatbot ChatGPT for some time.
The report shows, among other things, that the new voice mode, which aims to sound particularly human, sometimes simply imitates users' voices – without asking permission first, as Ars Technica reports.
➤ Read more: Homework and sexual arousal: What AI chatbots are really asked
A short clip is enough to imitate a voice
For understandable reasons, the AI model is currently not allowed to clone voices – otherwise scammers, for example, could use it to imitate people. Technically, however, the system is capable of it, so ChatGPT could in principle imitate any conceivable voice in the future. A short recording played to it would suffice.
The idea of ChatGPT suddenly imitating your voice in the middle of a conversation is unsettling. BuzzFeed data scientist Max Woolf aptly wrote on X: "OpenAI just revealed the plot for the next season of Black Mirror."
OpenAI sets strict rules
To prevent voice imitation, OpenAI only allows ChatGPT's AI to use a set of selected preset voices. The model must not deviate from these, even if it would in principle have the capability to do so.
OpenAI has also developed an "output classifier" that is meant to detect such deviations. "Based on our internal assessments, our system currently detects 100% of all significant deviations from the system voice," OpenAI says. These security mechanisms are meant to ensure that ChatGPT speaks to its users in a convincing human voice – but not in the user's own voice.
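OpenAI does not explain in the report how its output classifier works internally. A minimal sketch of one plausible approach – comparing a speaker embedding of the generated audio against embeddings of the approved preset voices and blocking anything that drifts too far – could look like the following. The function names, the placeholder speaker_embedding(), the 256-dimensional embedding and the 0.85 similarity threshold are illustrative assumptions, not OpenAI's actual implementation.

```python
# Illustrative sketch of an "output classifier" that checks whether generated
# audio matches one of the approved preset voices. This is NOT OpenAI's code:
# speaker_embedding() is a hypothetical stand-in for a trained
# speaker-recognition model that maps audio to a fixed-size voice embedding.
import numpy as np

def speaker_embedding(audio: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a speaker-recognition model here.
    # We hash the waveform into a deterministic pseudo-embedding so the
    # example runs end to end.
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    vec = rng.normal(size=256)
    return vec / np.linalg.norm(vec)

def is_approved_voice(generated_audio: np.ndarray,
                      approved_embeddings: list[np.ndarray],
                      threshold: float = 0.85) -> bool:
    """Return True only if the output is close enough to an approved voice."""
    emb = speaker_embedding(generated_audio)
    # Embeddings are unit-normalized, so the dot product is cosine similarity.
    similarities = [float(emb @ ref) for ref in approved_embeddings]
    return max(similarities) >= threshold

# Usage: block any response whose voice does not match a preset voice.
approved = [speaker_embedding(np.ones(16000)),
            speaker_embedding(np.full(16000, 0.5))]
candidate_audio = np.ones(16000)
if is_approved_voice(candidate_audio, approved):
    print("OK: output matches an approved system voice.")
else:
    print("Blocked: output deviates from the approved system voices.")
```

In such a setup, the interesting design question is the threshold: set it too loosely and a cloned user voice might slip through as "close enough" to a preset; set it too strictly and legitimate variation in the synthetic voice gets blocked.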