Most people know that robots no longer sound like tinny trash cans. They sound like Siri, Alexa, and Gemini. They sound like the voices in labyrinthine customer support phone trees. And even those robot voices are being made obsolete by new AI-generated voices that can mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can now clone someone’s specific voice.

  • jarfil@beehaw.org · 4 days ago

    Should, but won’t. The genie is out of the bag, there’s no putting it back… and it was a flimsy bag to begin with.

    Reminds me of “The Bicentennial Man”, when people decided to turn against humanoid robots. It won’t happen: some people are already spending a fortune on humanoid silicone dolls, and humanoid robot slaves are a much more likely future, with all that entails.

    Even worse: if the law forced AI voices to be modulated to sound robotic… scammers, who are already breaking the law, would have a field day using non-modulated voices.

  • MHLoppy@fedia.io · 5 days ago

    I can get behind the general idea, but in this implementation specifically it seems like the low modulation example isn’t distinct enough from simply lower-quality audio, but the higher modulation example (where the effect is more distinct as an intentional effect), is just not nice to listen to. Maybe there are other ways to distort the voice that don’t have as much of that downside?