this post was submitted on 14 Jan 2024
19 points (71.1% liked)

Gaming


Inspired by a discussion I had elsewhere and the article "Women in Games swaps male and female voices to highlight harassment in gaming", how about we start a voice modulation challenge where you have to play at least one online game with a voice modulator to sound like a girl?

I'm curious what the experiences would be like.

[–] [email protected] 8 points 11 months ago (2 children)

I was very glad to read the last sentence. I agree fully. The easiest approach would be a report button that saves the last 60 seconds of voice, analyzes it with AI to check whether something illegal or harassing was said, and auto-kicks the person who said it.

It would not require more top-down systems.
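The flow described above (retain only the last minute of audio, analyze it when someone hits Report, kick on a hit) could be sketched roughly like this. This is a minimal illustration, not a real implementation: `classify` and `kick` are hypothetical hooks standing in for the AI model and the game server.

```python
import time
from collections import deque

RETENTION_SECONDS = 60  # keep only the last minute of audio


class ReportBuffer:
    """Rolling buffer that retains at most the last 60 seconds of audio chunks."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._chunks = deque()  # (timestamp, chunk) pairs, oldest first

    def push(self, chunk):
        ts = self._now()
        self._chunks.append((ts, chunk))
        # Evict anything older than the retention window, so nothing
        # outside the last 60 seconds is ever stored.
        while self._chunks and ts - self._chunks[0][0] > RETENTION_SECONDS:
            self._chunks.popleft()

    def snapshot(self):
        """Called when a player hits Report: return the retained audio."""
        return [chunk for _, chunk in self._chunks]


def handle_report(buffer, classify, kick, player):
    # `classify` and `kick` are hypothetical hooks: classify() returns True
    # if the retained audio contains harassment; kick() removes the player.
    if classify(buffer.snapshot()):
        kick(player)
```

The point of the deque eviction is that the buffer never holds more than the retention window, which is what keeps the stored data minimal.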

[–] [email protected] 3 points 11 months ago (1 children)

I personally lean more towards humans for moderation, as words alone don't convey the full intent and meaning. And this cuts both ways: benign words can be used to harass.

But of course, humans are expensive, and recordings of voice chat have privacy implications.

[–] [email protected] 3 points 11 months ago (1 children)

Generally, yes. But computers can handle this sort of thing very well at this point. Kicking someone for using the N-word does not require understanding meaning. Just don't use it, even if it's for educational purposes (inside a game chat, for example).

> and recordings of voice chat have privacy implications.

I don't think we live in the same reality. Over 30% of people in the US use voice assistants that constantly listen in on their conversations (that was just the first number I could find; I'm not from the US). Having a bot in a game voice chat store one minute of audio for one minute for reporting purposes is like 0.00001% of what is going wrong with security. Billions of people are being analyzed, manipulated, and whatnot on a daily basis. A reporting tool is not even the same game, let alone in the same ballpark, in terms of privacy implications.

[–] [email protected] 3 points 11 months ago (1 children)

Yeah, AI to knock out the egregious stuff (n-bombs etc.) is perfectly reasonable. But there is still a lot of harassment that really needs a human to interpret. It's a balance.

The privacy I am thinking of is the legal side of things. Google/FB/Apple are huge companies with the resources to work through the different legal requirements for every state and country, and they can afford to just settle if anything goes wrong. A game studio cannot always do the same. As soon as you store a recording of a user's voice, even temporarily, it opens up a lot of legal risk. Developers/publishers should still do it, imo, but I don't think it's something that can just be turned on without careful consideration.

[–] [email protected] 2 points 11 months ago

Good thought. Thanks for bringing it up.

[–] [email protected] -3 points 11 months ago (1 children)

Yeah, that sounds totally reasonable and unintrusive, wtf. I don't want my every word spoken in voice chat to be live-analyzed by AI to see if I committed a wrongthink.

Why not simply mute or kick someone if they're being an asshole? That has served me well in all my years using Discord or TeamSpeak.

[–] [email protected] 3 points 11 months ago (1 children)

Apart from what you're reading into my words, I said that if someone is harassing you, or talking about, let's say, the things they did with their daughter yesterday, you can report them and have a computer look into it instead of a human.

Whatever privileges you have in your Discord, you can't kick just anyone in every place. You normally need either privileges or a moderator to do it, and my idea was to use AI to analyze the reported material.

[–] [email protected] 1 points 11 months ago (1 children)

I completely understand the sentiment of protecting children, but under that same argument you can push the most dystopian, intrusive, overreaching legislation imaginable. It is the old balance of freedom versus safety: we can't have complete safety without giving up all freedom.

And I think constant AI-driven monitoring of everything people say in the general vicinity of a microphone is very dystopian, and that would be the eventual outcome of this.

[–] [email protected] 1 points 11 months ago (1 children)

I'm just gonna repeat myself, since this is the most common answer I get in these discussions:

The vast majority of people are being listened in on, analyzed, and manipulated on a daily basis by far, far worse actors. Storing one minute of voice chat for one minute, accessible only to this hypothetical bot and only if someone reports the speaker (with the reporter themselves facing consequences for wrongful reports), is not comparable to real privacy threats.

[–] [email protected] 1 points 11 months ago (1 children)

You don't need to repeat yourself (nor be this condescending); I am well aware that this is already happening to some degree. That doesn't mean I have to happily concede the little privacy that is left.

[–] [email protected] 1 points 11 months ago (1 children)

You're again reading something into my words that I didn't say. Maybe try not to play the victim in every comment; it's abrasive.

It's not happening merely to some degree; it's happening left, right, and center. Denying that a computer would help with voice chat moderation does not help at all.

Good day.

[–] [email protected] 1 points 11 months ago

Right back at ya, buddy. I'm not putting words in your mouth.

And no matter how many times you repeat it, my Discord call doesn't constitute a threat to public safety.