OpenAI wins $200m contract with US military for ‘warfighting’
-
I'm really curious what brainrot is in that man's head.
-
The war they are fighting is against you.
-
OpenAI wants the money, and the military wants to never have to deal with accountability. That way, when they bomb wherever they want, they can just say "it wasn't my decision, it was the AI," and then OpenAI can say "we need more money to make it more reliable. Also we need more training data from the military so it won't happen again, can we have it all?"
-
Use of the terms "warfighter" or "warfighting" is one of the biggest red flags in my life due to the industry I'm in. Big cringe. Might as well just say "I wanna make the world more White and Christian. I'm not in the military but love tacticool fashion. 'Murrica."
-
OpenAI’s core message was “we can’t release our GPT model because people will try to use it for war”.
Fucking hypocrites.
-
We're gonna have to start holding programmers accountable for war crimes.
-
We're gonna have to start holding programmers accountable for war crimes.
Were you under the impression that all the smart missiles, smart guns, smart everything didn't already require programmers?
-
I'm really curious what brainrot is in that man's head.
There is a device that allows for the head to be easily separated for a more thorough analysis. I say we start building some.
-
We're gonna have to start holding programmers accountable for war crimes.
For that, they'll need to license and protect the profession like other engineering vocations.
-
I'm really curious what brainrot is in that man's head.
When you pay money to be at the Nazi inauguration, it shouldn't be a surprise that you've accepted the Nazi blood money.
-
OpenAI wants the money, and the military wants to never have to deal with accountability. That way, when they bomb wherever they want, they can just say "it wasn't my decision, it was the AI," and then OpenAI can say "we need more money to make it more reliable. Also we need more training data from the military so it won't happen again, can we have it all?"
I've dreamed of a moment when society could no longer go on without clear and unambiguous personal responsibility.
This is that moment. In olden days, even if an apparatus made a decision, the apparatus still consisted of people. Now the mechanism can involve no people at all. Garbage decisions aside, that's something new, or, to be more precise, something forgotten long ago: from the times when strategic decisions were divined from birds' entrails and lambs' bones. I suppose back then such fortunetelling was a mechanism for making decisions random enough to avoid dangerous predictability and to keep traitors from swaying the outcome.
The problem with AI, or "AI", is that it's not logically the same as that fortunetelling.
And about personal responsibility: in ancient Greece (and Rome), an unfortunate result of such decision-making was not blamed on the gods. It was blamed on the leader, for lacking the gods' favor, or on the fortuneteller, for failing to interpret the gods' will (which, if they could influence the result, was fair). Or sometimes on the whole unit, the whole army, or the whole city-state. So the main trait of any human mechanism was present: there was a responsible party.
When an "AI" is used, a clearly identifiable set of people should be responsible, whether the company providing the program, the operator, the officer making the decisions, or all of them, if we want to at least match that old way.
-
There is a device that allows for the head to be easily separated for a more thorough analysis. I say we start building some.
Ah, I see I've found a fellow member of la révolution. I applaud your scientific curiosity.
-
Use of the terms "warfighter" or "warfighting" is one of the biggest red flags in my life due to the industry I'm in. Big cringe. Might as well just say "I wanna make the world more White and Christian. I'm not in the military but love tacticool fashion. 'Murrica."
Or might as well say "Yes, I like money and want to sell to the DoD." Source: I may have used it in a slide deck once. Not actually sure, as the phrase wasn't as popular back then.
-
Were you under the impression that all the smart missiles, smart guns, smart everything didn't already require programmers?
They were not making targeting decisions.
-
OpenAI’s core message was “we can’t release our GPT model because people will try to use it for war”.
Fucking hypocrites.
"If people use it for war then people won't pay us to use it for war"
-
OpenAI’s core message was “we can’t release our GPT model because people will try to use it for war”.
Fucking hypocrites.
Capitalists will do anything for money. Nothing is off the table.
-
I've only ever used ChatGPT infrequently, and I find it mediocre at best, so I have no trouble abandoning it completely. Not giving them any more free training.