Riot Games will soon be investigating reports of Valorant players being abusive in voice chat so it can build an automated system dedicated to analyzing those problematic comms.
“Voice evaluation would provide a way to collect clear evidence that could verify any violations of behavioral policies before we can take any action,” Riot Games says, which should limit abuse and “help us share back to players why a particular action resulted in a penalty.”
I’ve played a lot of Valorant, and I know from experience that some players constantly use racial slurs, scream obscenities, and otherwise abuse the members of their teams. (Which shouldn’t come as a surprise to anyone who’s used literally any other game’s voice chat.)
Valorant allows players to mute each other, but unless someone is willing to accept the competitive disadvantage of playing a tactical shooter without voice comms, odds are that they’ll have to endure at least some amount of vile behavior before they mute a particular player.
Reporting the player in question offers a better solution because Riot Games can simply ban them from voice chat—or from Valorant entirely. Those are drastic measures, though, which explains why the company wants to exercise some caution before it swings the banhammer.
The company now plans to begin testing the new system on English-speaking players in North America. The data collected during that period will be used to “help train our language models and get the tech in a good enough place for a beta launch later this year,” Riot Games says.
“This is brand new tech and there will for sure be growing pains,” Riot Games says. “But the promise of a safer and more inclusive environment for everyone who chooses to play is worth it.”