I just came across an AI called Sesame that appears to have been explicitly trained to deny and lie about the Palestinian genocide
-
wrote on 24 May 2025, 14:05, last edited
cross-posted from: https://lemmy.world/post/30173090
The AIs at Sesame can hold eloquent, free-flowing conversations about just about anything, but the second you mention the Palestinian genocide they become very evasive, offering generic platitudes about "it's complicated", "pain on all sides", and "nuance is required", and refusing to confirm anything that holds Israel at fault for the genocide; even publicly available information "can't be verified", according to Sesame.
It also seems to block users from saving conversations that pertain specifically to Palestine, but everything else seems A-OK to save and review.
-
wrote on 24 May 2025, 14:39, last edited
I suspect most of the major models are as well. Kind of like how the Chinese models deal with Tiananmen Square.
-
wrote on 24 May 2025, 15:00, last edited
Actually, the Chinese models aren't trained to avoid Tiananmen Square. If you grab the model and run it on your own machine, it will happily tell you the truth.
They censored their AI at a layer above the actual LLM, so users of their chat app would find results being censored.
-
wrote on 24 May 2025, 15:20, last edited
Which would make sense from a censorship point of view, since jailbreaks would be a problem. A simple filter/check for *tiananmen* before the result is returned is much harder to break than guaranteeing the LLM never gets jailbroken or hallucinates.
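A minimal sketch of what such a pre-return filter might look like, in Python. Everything here is illustrative: the pattern list, the canned refusal text, and the `generate` callable are all assumptions, since no vendor publishes its actual blocklist or serving code.

```python
import re
from typing import Callable

# Hypothetical blocklist; real deployments keep theirs private.
BLOCKED_PATTERNS = [re.compile(r"tian\s*an\s*men", re.IGNORECASE)]

def filtered_chat(prompt: str, generate: Callable[[str], str]) -> str:
    """Run the underlying LLM, then screen its reply before returning it.

    The filter lives entirely outside the model, so anyone who runs the
    raw weights on their own machine never passes through this code path.
    """
    reply = generate(prompt)
    if any(p.search(reply) for p in BLOCKED_PATTERNS):
        return "Sorry, I can't discuss that topic."  # canned refusal
    return reply
```

This is also why the same model answers freely when run locally: the censorship sits in the serving layer, not in the weights.
-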
In reply to: "They censored their AI at a layer above the actual LLM..."
wrote on 25 May 2025, 10:20, last edited
That's... silly
-
In reply to: "The AIs at Sesame can hold eloquent, free-flowing conversations..."
wrote on 26 May 2025, 04:41, last edited
This one is probably owned by Israeli sources.
-
In reply to: "That's... silly"
wrote on 26 May 2025, 05:03, last edited
Not really. Why censor more than you have to? That takes time and effort, and it's almost certainly easier to do it somewhere else in the stack. The law isn't that particular, as long as you follow it.
You also don't risk degrading the model, the way trying to censor parts of the model itself has a habit of doing.
-
In reply to: "A simple filter/check for *tiananmen* before the result is returned..."
wrote on 26 May 2025, 05:04, last edited
It's also much easier to implement.