Meta plans to replace humans with AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content
-
This post did not contain any content.
-
This post did not contain any content.
great idea..!
-
This post did not contain any content.
Great move for Facebook. It'll let them claim they're doing something to curb horrid content on the platform without actually doing anything.
-
This post did not contain any content.
Would be a shame if people had to sift through AI-generated gore before the bots like and comment on it. But seriously, good on them.
-
This post did not contain any content.
This might be the one time I'm okay with this. It's too hard on the humans that did this. I hope the AI won't "learn" to be cruel from this though, and I don't trust Meta to handle this gracefully.
-
This post did not contain any content.
Honestly, I've always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.
-
Great move for Facebook. It'll let them claim they're doing something to curb horrid content on the platform without actually doing anything.
The marketing behind AI must feel like a runner's high. “Something has AI”
-
This might be the one time I'm okay with this. It's too hard on the humans that did this. I hope the AI won't "learn" to be cruel from this though, and I don't trust Meta to handle this gracefully.
I mean, you could hire people who would otherwise enjoy the things they moderate. Keep em from doing shit themselves.
But, if all the sadists, psychos, and pedos were moderating, it would be reddit, I guess.
-
Honestly, I've always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.
Not suitable for Lemmy?
-
This post did not contain any content.
Following Tumblr's lead, I see...
-
I mean, you could hire people who would otherwise enjoy the things they moderate. Keep em from doing shit themselves.
But, if all the sadists, psychos, and pedos were moderating, it would be reddit, I guess.
My guess is you don't know how bad it is. These moderators at Meta have real PTSD, and it would absolutely benefit everyone if this could in any way be automated with AI.
The next question, though, is: do you trust Meta to moderate? Nah, it should be an independent AI they couldn't tinker with.
-
This might be the one time I'm okay with this. It's too hard on the humans that did this. I hope the AI won't "learn" to be cruel from this though, and I don't trust Meta to handle this gracefully.
pretty common misconception about how “AI” works. models aren’t constantly learning. their weights are frozen before deployment. they can infer from context quite a bit, but they won’t meaningfully change without human intervention (for now)
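To make the point concrete, here's a minimal toy sketch (purely illustrative, not Meta's system or any real model): at deployment the weights are fixed constants, and inference only reads them, so running the model on content never changes it.

```python
# Toy "moderation model" with frozen weights (hypothetical example).
# Inference is a pure function of the fixed weights and the input;
# nothing in this path ever writes to the weights, so the model
# cannot "learn" from the content it sees.

WEIGHTS = [0.4, -1.2, 0.7]  # frozen when training finished

def classify(features):
    # Inference: weighted sum of input features against frozen weights.
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return "flag" if score > 0 else "allow"

snapshot = list(WEIGHTS)
classify([1.0, 0.5, 2.0])    # run inference on some content
classify([0.0, 3.0, -1.0])   # and again
assert WEIGHTS == snapshot   # weights unchanged: no learning happened
```

Updating the model would require a separate, human-initiated training run that produces a new set of weights; deployed inference alone never does that.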
-
This post did not contain any content.
Meta:
Here, AI. Watch all the horrible things humans are capable of and more for us. Make sure nothing gets through.
AI:
becomes SKYNET
-
Honestly, I've always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.
What about false positives? Or a process to challenge them?
But yes, I agree with the general idea.
-
This post did not contain any content.
moderation on facebook? i'm sure it can be found right next to bigfoot
(other than automated immediate nipple removal)
-
This post did not contain any content.
Oh man, I may have to stop using this fascist sewer hose.
-
Meta:
Here, AI. Watch all the horrible things humans are capable of and more for us. Make sure nothing gets through.
AI:
becomes SKYNET
Ouija boards made of databases don't really think
-
This post did not contain any content.
A bold strategy, Cotton
-
Great move for Facebook. It'll let them claim they're doing something to curb horrid content on the platform without actually doing anything.
.
-
What about false positives? Or a process to challenge them?
But yes, I agree with the general idea.
Or a process to challenge them?