The OpenAI Files Document Broken Promises, Safety Compromises, Conflicts of Interest, and Leadership Concerns

Technology
  • Major Areas of Concern:

    ::: spoiler Restructuring: Analysis of planned changes to the nonprofit's relationship with its for-profit subsidiary

    • OpenAI plans to remove limits on investor returns: OpenAI once capped investor profits at a maximum of 100x to ensure that, if the company succeeds in building AI capable of automating all human labor, the proceeds would go to humanity. They have now announced plans to remove that cap.
    • OpenAI portrays itself as preserving nonprofit control while potentially disempowering the nonprofit: OpenAI claims to have reversed course on a decision to abandon nonprofit control, but the details suggest that the nonprofit’s board would no longer have all the authority it would need to hold OpenAI accountable to its mission.
    • Investors pressured OpenAI to make structural changes: OpenAI has admitted that it is making these changes to appease investors who have made their funding conditional on structural reforms, including allowing unlimited returns—exactly the type of investor influence OpenAI’s original structure was designed to prevent.
      :::

    ::: spoiler CEO Integrity: Concerns regarding leadership practices and misleading representations from OpenAI CEO Sam Altman

    • Senior employees have attempted to remove Altman at each of the three major companies he has run: Senior employees at Altman’s first startup twice urged the board to remove him as CEO over “deceptive and chaotic” behavior, while at Y Combinator, he was forced out and accused of absenteeism and prioritizing personal enrichment.
    • Altman claimed ignorance of a scheme to coerce employees into ultra-restrictive NDAs: However, he signed documents giving OpenAI the authority to revoke employees’ vested equity if they didn’t sign the NDAs.
    • Altman repeatedly lied to board members: For example, Altman stated that the legal team had approved a safety process exemption when they had not, and he reported that one board member wanted another board member removed when that was not the case.
      :::

    ::: spoiler Transparency & Safety: Concerns regarding safety processes, transparency, and organizational culture at OpenAI

    • OpenAI coerced employees into signing highly restrictive NDAs threatening their vested equity: Former OpenAI employees faced highly restrictive non-disclosure and non-disparagement agreements that threatened the loss of all vested equity if they ever criticized the company, even after resigning.
    • OpenAI has rushed safety evaluation processes: OpenAI rushed safety evaluations of its AI models to meet product deadlines and significantly cut the time and resources dedicated to safety testing.
    • OpenAI insiders described a culture of recklessness and secrecy: OpenAI employees have accused the company of not living up to its commitments and systematically discouraging employees from raising concerns.
      :::

    ::: spoiler Conflicts of Interest: Documenting potential conflicts of interest of OpenAI board members

    • OpenAI’s nonprofit board has multiple seemingly unaddressed conflicts of interest: While OpenAI defines ‘independent’ directors as those without OpenAI equity, the board appears to overlook conflicts from members' external investments in companies that benefit from OpenAI partnerships.
    • CEO Sam Altman downplayed his financial interest in OpenAI: Despite once claiming to have no personal financial interest in OpenAI, much of Altman’s $1.6 billion net worth is spread across investments in OpenAI partners including Retro Biosciences and Rewind AI, which stand to benefit from the company’s continued growth.
    • No recusals announced for critical restructuring decision: Despite these conflicts, OpenAI has not announced any board recusals for the critical decision of whether they will restructure and remove profit caps, unlocking billions of dollars in new investment.
      :::

    You’re telling me that Sam ConvenientLastname is a shitty person and the company they run is also a shitty thing? Say it ain’t so!!!


    There is nothing open about OpenAI, and that was obvious way before they released ChatGPT.

  • Microsoft C++ static analysis tool bolsters warning suppressions

    Technology · 17 votes · 1 post · 6 views
    Nobody has replied
  • escorte paris

    Technology · 0 votes · 1 post · 17 views
    Nobody has replied
  • Why do AI company logos look like buttholes?

    Technology · 36 votes · 5 posts · 58 views
    ivanafterall@lemmy.world
    It's a nascent industry standard called The Artificial Intelligence Network Template, or TAINT.
  • Windows 11 finally overtakes Windows 10 [in market share]

    Technology · 63 votes · 32 posts · 302 views
    Yeah, and it's most likely only due to them killing Windows 10 in the fall, which means a lot of companies have been working hard this year to replace a ton of computers before October. Anyone who has been down this road with 7 to 10 knows it will just cost more money if you need to continue support after that: they sell you a new license that's good for a year and allows updates to continue, and it doubles in cost every year after.
  • How Cops Can Get Your Private Online Data

    Technology · 107 votes · 5 posts · 49 views
    Private and online don't mix, except if it's encrypted.
  • AI Pressure from the Top: CEOs Urge Workers to Adapt

    Technology · 1 vote · 1 post · 15 views
    Nobody has replied
  • The Quantum Tech Renaissance: Are We Ready?

    Technology · 0 votes · 1 post · 18 views
    Nobody has replied
  • AI cheating surge pushes schools into chaos

    Technology · 45 votes · 25 posts · 218 views
    Sorry for the late reply, I had to sit and think on this one for a little bit. I think there would be a few things going on when it comes to designing a course to teach critical thinking, nuance, and originality, and they each have their own requirements.

    For critical thinking: the main goal is to provide students with a toolbelt for solving various problems, then instill the habit of always asking "does this match the expected outcome? What was I expecting?" So usually courses will be set up so students learn about a tool, practice using the tool, then have a culminating assignment on using all the tools. Ideally, the problems students face at the end require multiple tools to solve.

    Nuance mostly comes naturally with exposure to the material from a professional: the way a mechanical engineer describes building a desk will probably differ greatly from the way a fantasy author would. You can also explain definitions and industry standards, but that's really dry. So I try to teach nuance via definitions by mixing in the weird edge cases as much as possible, with jokes.

    Then for originality, I've realized I don't actually look for an original idea, but something creative. In a classroom setting, you're usually learning new things about a subject, so a student's knowledge of that space is very limited. Thus, an idea they've never heard of may be original to them but common for an industry expert. For teaching creativity, I usually provide time to be creative and think, along with open-ended questions as prompts to explore ideas. My courses that require originality usually have it as part of the culminating assignment at the end, where students can apply their knowledge. I'll also set aside time where students can come to me with preliminary ideas and get feedback on whether they pass the creative threshold. Not all ideas are original, but I sometimes give a bit of slack if it's creative enough.

    The amount of course overhauling to get around AI really depends on the material being taught. In programming, for example, you teach critical thinking by always testing your code, even with parameters that don't make sense: try to add 123 + "skibbidy" and see what the program does.
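    A minimal Python sketch of that last exercise (the `add` helper is hypothetical, made up for illustration): Python rejects the mixed-type addition outright, whereas a language with implicit coercion, like JavaScript, would quietly return the string "123skibbidy", which is exactly the mismatch the "what was I expecting?" habit is meant to catch.

    ```python
    # Classroom-style probe: call a function with inputs that "don't make
    # sense" and compare the result against what you expected.

    def add(a, b):
        """Naive addition with no input validation (hypothetical example)."""
        return a + b

    print(add(120, 3))  # 123, matches the expected outcome

    try:
        add(123, "skibbidy")  # int + str: what should this even mean?
    except TypeError as exc:
        # Python refuses: unsupported operand type(s) for +: 'int' and 'str'
        print(f"Caught: {exc}")
    ```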