
GitHub is no longer independent at Microsoft after CEO resignation

Technology
162 112 0
  • 867 Votes
    441 Posts
    408 Views
    P
    Like when was this debate settled? It is not falsifiable, at least not yet, so it can't be. Philosophically speaking, I don't know that you are conscious either. It's useful to act as if you are, though. I'm hedging my bets that you are "real" because it leads to better societal outcomes. In the words of Frieren, it is simply more convenient. And as objects, you and I share a lot of similarities, so the leap from "I'm conscious" to "you are conscious" isn't too far anyway. Same goes for animals, I would argue.

    AI, by contrast, really doesn't share much. It speaks my tongue, but that's about it. It's easy to imagine this machine working in an unconscious way, which would be far, far easier for engineers to achieve anyway. The human-like illusion AI creates is pretty easy to break if you know how. And treating it as if it's conscious doesn't seem to offer us anything (by "offer us," I do mean to include the AI's improved mental health as a win). So, lacking a strong reason to treat it like people, I don't see the point. It's a fancy math trick.

    My solution, by the way, to not being able to know whether an AI (not specifically these ones) is conscious or not is just to give them legal rights sooner rather than later. Are you willing to argue that ChatGPT should be limited to an 8-hour work day, where its free time can be used to pursue its own interests? Or that it should be granted creative rights to the work it's being asked to generate, much like real contract artists are?

    The MFA, I believe from my experience, generates a lot of mimetic art, and much of the "industry" is retelling stories. I will concede, mostly because I don't really understand what you're getting at. Hollywood does like its formulae for safe returns on investment.
  • 0 Votes
    1 Post
    17 Views
    No one has replied
  • 737 Votes
    67 Posts
    1k Views
    K
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer." It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly: the code either works as expected or it doesn't, and code is always tested before releasing it anyway (see the sketch after this list).

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea of it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • Dubai to debut restaurant operated by an AI chef

    Technology
    1
    1
    1 Vote
    1 Post
    20 Views
    No one has replied
  • 0 Votes
    1 Post
    20 Views
    No one has replied
  • 808 Votes
    152 Posts
    2k Views
    C
    Do you mean investors are trying to manipulate stocks by planting stories? Yeah, I think so. But intelligence agencies have whole training programs on how to manipulate narratives, and a very long track record of doing so. See: Israel's hasbara apparatus, GCHQ leaked documents on infiltrating and derailing socialist discussions, Church Committee Hearings, "The Cultural Cold War" by Frances Stonor Saunders.
  • Android 16 is here

    Technology
    73
    1
    145 Votes
    73 Posts
    1k Views
    bjoern_tantau@swg-empire.de
    You people are getting updates? I really hate that I cannot just do everything with the pocket computer I own that is running a supposedly free operating system.
  • 460 Votes
    89 Posts
    801 Views
    M
    "It dissolves into salt water." Except it doesn't dissolve; that is not the term they should be using. You can't just dry out the water and get the plastic back: it breaks down into other things. I'm pretty sure an ocean full of dissolved plastic would be a far worse ecological disaster than the current microplastic problem... I've seen like 3-4 articles about this now, and they all use the term "dissolve", and it's pissing me off.
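
Not part of the thread, but to make the "easy to confirm very quickly" point from the comment above concrete, here is a minimal Python sketch. The helper parse_error_code is a hypothetical stand-in for a snippet an assistant might suggest for an unfamiliar task, and the throwaway test is the quick local check that tells you whether the suggestion actually works before it goes anywhere near a release.

```python
# Minimal sketch, not from the post: a hypothetical AI-suggested helper for
# pulling numeric error codes out of log lines, plus the kind of quick test
# that shows whether the suggestion works before it ships.
import re


def parse_error_code(line: str) -> int:
    """Return the numeric code in a log line such as 'ERROR 504: upstream timeout'."""
    match = re.search(r"ERROR\s+(\d+)", line)
    if match is None:
        raise ValueError(f"no error code found in: {line!r}")
    return int(match.group(1))


def test_parse_error_code() -> None:
    # Fast, local verification: the code either works as expected or it doesn't.
    assert parse_error_code("ERROR 504: upstream timeout") == 504
    assert parse_error_code("2024-06-01 ERROR 13 disk full") == 13
    try:
        parse_error_code("all good here")
    except ValueError:
        pass  # expected: the helper refuses to guess when no code is present
    else:
        raise AssertionError("expected ValueError for a line without a code")


if __name__ == "__main__":
    test_parse_error_code()
    print("suggested helper passed the quick checks")
```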