Fred, a trans man, clicked his mouse and his tenor voice suddenly sank deeper. He had switched on a voice-changing algorithm that works like an instant vocal cord transplant. “This is ‘Seth,’” he said of the new voice he was trying out on a Zoom call with a reporter. Then he began to speak as “Joe,” whose voice was more nasal and brighter.
Fred’s friend Jane, a trans woman who also tested the prototype software, smiled as she showed off the artificial voices she liked for their feminine sound. “This is ‘Courtney,’” bright and cheery. “Here’s ‘Maya,’” higher pitched, sometimes too much so. “And this is ‘Alicia,’ which has the most natural voice I’ve found,” she said, more softly. The glitches were fleeting enough to prompt the passing thought that the pair might not have joined the call with their “real” voices at all.
Fred and Jane are early testers of technology from the startup Modulate that could add new fun, safety, and complexity to online socializing. WIRED is not using their real names to protect their privacy; trans people are frequent targets of online harassment. The software is the latest example of the uncanny capabilities of artificial intelligence to synthesize realistic video or audio, sometimes called deepfakes.
Modulate co-founders Mike Pappas and Carter Huffman initially thought the technology could make gaming more fun by letting players take on their characters’ voices as “voice skins.” As the pair signed up studios and recruited early testers, they also heard a chorus of interest in using voice skins as a privacy shield. More than 100 people have asked whether the technology could ease the dysphoria caused by a mismatch between their voices and their gender identities.
“We realized that many people don’t feel they can participate in online communities because their voices would put them at even greater risk,” said Pappas, Modulate’s CEO. The company is now working with game companies to offer voice skins as both a fun feature and a privacy option, while pledging that the technology won’t become a tool for fraud or harassment.
Games like Fortnite and social apps like Discord have made it commonplace to join voice chats with strangers on the internet. As in the early days of internet text chat, the voice boom has opened up new joys and new horrors.
The Anti-Defamation League found last year that nearly half of gamers had been harassed via voice chat, more than through text. A persistent sexism in gaming culture singles out women and LGBTQ people for particular abuse. When Riot Games launched the team-based shooter Valorant in 2020, executive producer Anna Donlon said she was shocked to see a culture of sexist harassment emerge so quickly. “I don’t use voice chat if I’m alone,” she told WIRED.
Modulate’s technology isn’t widely available yet, but Pappas says he’s in talks with game companies interested in deploying it. One possible approach is a mode within a game or community where everyone is assigned a voice skin to match their character, whether barbarous troll or noble knight; alternatively, voices could be assigned randomly.
In June, two of Modulate’s voices launched in a preview of an app called Animaze, which transforms users into digital avatars for livestreams or video calls. The developer, Holotech Studios, markets the voices as a privacy feature and as a way to “transform your voice to better fit a character with a different age, gender, or body type than your own.” Modulate also offers game companies software that automatically flags signs of abuse in voice chat to moderators.
Modulate’s voice skins are powered by machine-learning algorithms that adjust the audio frequencies of a person’s voice so that they sound like someone else. To teach its technology the contours of a wide variety of pitches and timbres, the company collected recordings of hundreds of actors reading scripts designed to elicit a broad range of tones and emotions. Individual voice skins are created by tuning the algorithms to replicate the sound of a specific voice actor.
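Modulate’s models are proprietary and go far beyond simple pitch manipulation, but the crudest form of “adjusting audio frequencies” can be sketched in a few lines. The toy below, written in plain NumPy, lowers a signal by a few semitones via naive resampling; the function name, the 220 Hz test tone, and all parameters are illustrative assumptions, not anything from Modulate.

```python
import numpy as np

def shift_pitch(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Naive pitch shift: resample the waveform, then play it back at the
    original rate. Lowering pitch this way also stretches the duration."""
    factor = 2.0 ** (semitones / 12.0)  # frequency ratio per semitone
    new_index = np.arange(0, len(samples) - 1, factor)
    return np.interp(new_index, np.arange(len(samples)), samples)

sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate       # one second of audio
voice = np.sin(2 * np.pi * 220.0 * t)          # stand-in for a 220 Hz voice
deeper = shift_pitch(voice, -4)                # four semitones lower

# Locate the dominant frequency of the shifted signal (~175 Hz).
freqs = np.fft.rfftfreq(len(deeper), 1 / sample_rate)
peak = freqs[np.argmax(np.abs(np.fft.rfft(deeper)))]
```

A real voice skin has to do much more than this: naive resampling slows the audio down and shifts every frequency equally, flattening the timbre. That is presumably why systems like Modulate’s rely on learned models trained on actors’ recordings rather than signal-processing tricks alone.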