To explore AI bias, researchers pose a question: How do you imagine a tree?

Technology
  • I really like that it talks about the ontological systems that are completely and utterly disregarded by the models. But then the article whiffed: it forgot all about how those systems could inform models and only talked about how they constrain them. The reality is the models do NOT consider any ontological basis beyond what is encoded in the language used to train them. What needs to be done is to allow LLMs to somehow tap into ontological models as part of the process of generating responses. Then you could plug in different ontologies to make specialized systems.

    In theory something similar could be done with enough training. Guess what that would cost. Does enough clean water and energy exist to train it? Probably best not to find out, but techbros will try.
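    The "plug in different ontologies" idea could be sketched roughly like this. Everything below is hypothetical and invented for illustration: the toy ontology, the triples, and the checking function are not any real system's API, just one way a pluggable ontology might vet a model's claims.

```python
# Hypothetical sketch: checking a model's draft claims against a
# pluggable ontology before returning them. The ontology contents
# are made up for illustration.

ONTOLOGY = {
    ("tree", "is_a"): "plant",
    ("tree", "has_part"): {"root", "trunk", "branch", "leaf"},
}

def consistent_with_ontology(subject, relation, value):
    """Check a claimed (subject, relation, value) triple against the ontology."""
    known = ONTOLOGY.get((subject, relation))
    if known is None:
        return True  # ontology is silent; don't block the claim
    if isinstance(known, set):
        return value in known
    return value == known

# Claims a language model might emit:
print(consistent_with_ontology("tree", "has_part", "trunk"))  # True
print(consistent_with_ontology("tree", "is_a", "mineral"))    # False
```

    Swapping in a different ONTOLOGY dictionary would specialize the checker, which is the appeal of the idea: the logical constraints live outside the trained weights.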

  • Wow, AI researchers are not only adopting philosophy jargon, but they're starting to cover some familiar territory. That is the difference between signifier (language) and signified (reality).

    The problem is that spoken language is vague, colloquial, and subjective. Therefore spoken language can never produce something specific, universal, or objective.

    I deep-dived into AI research when the bubble first started with ChatGPT 3.5. It turns out most AI researchers are philosophers, because thus far there have been very few technical elements to discuss. Neural networks and machine learning were very basic, and a lot of proposals were theoretical. Generative AI, as in LLMs and image generators, existed as philosophical proposals before real technological prototypes were built. A lot of it comes from epistemological analysis mixed in with neuroscience and devops. It's a relatively recent trend that the Wall Street techbros have inserted themselves into and come to dominate the space.

  • When the Generative Agents system was evaluated for how “believably human” the agents acted, researchers found the AI versions scored higher than actual human actors.

    That's a neat finding. I feel like there's a lot to unpack there around how our expectations are formed.

    Or how we operationalize and interpret information from studies. You might think you're measuring something according to a narrow definition and operationalization of the measurement, but that doesn't guarantee that that's what you are actually getting. It's more an epistemological and philosophical issue. What is "believably human"? And how do you measure it? It's a rabbit hole in and of itself.

  • AI is seeing much more widespread use than just among people with a technical background. So its application, namely in education but also in all other non-CS disciplines, will be through people with limited understanding of the biases. It is important to make them explicit, to underline that an LLM will reproduce the biases it deduced from its training data and its loss function. But loss functions and training data are not public knowledge; studies need to be performed to understand how the coders’ own biases influenced the LLM scheme itself.

    A photo has less bias because we know what it is representing: a photo only shows what can be seen. But the same understanding is not there for AI. Why show a photo-realistic tree versus a biological diagram? Choices have been made, of which a broader audience needs to be aware.

    A photo has less bias because we know what it is representing: a photo only shows what can be seen.

    I agree with you on AI, but the above statement is ignoring what photography is and the biases intrinsic to it.

    You see, that understanding you expect to be developed for AI is not there for you with photography.

  • Thus, a user receives an answer that has already undergone a filtering of sorts.

    Wouldn't this be an expected trait of a system predicting the next most likely token based on lossy compression of specific datasets and other lossy optimization?

    Depends. For an expert, that is self-evident (even if it might not be clear which biases have been incorporated). But that is not how it has been marketed. ChatGPT and similar are perceived as answering “the truth” at all times, and that skews the user’s understanding of the answers. Researching how deeply the answers are affected by the coders’ bias is the focus of their research and a worthwhile undertaking, to avoid overlooking something important.
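    The "next most likely token" mechanism in the question above can be sketched with a toy distribution. The logits below are invented numbers, not from any real model; the point is only that deterministic greedy decoding reproduces whatever ranking the training baked into the scores.

```python
import math

# Invented logits for three candidate next tokens; in a real model
# these come from the network and encode its training-data biases.
logits = {"oak": 2.0, "pine": 1.0, "baobab": -1.0}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding always picks the single most likely token,
# so the same "filtered" answer comes out every time.
next_token = max(probs, key=probs.get)
print(next_token)  # oak
```

    Sampling with a temperature instead of taking the argmax would reintroduce some variety, but the underlying distribution, and whatever bias it encodes, is still fixed by the training process.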

  • A photo has less bias because we know what it is representing: a photo only shows what can be seen.

    I agree with you on AI, but the above statement is ignoring what photography is and the biases intrinsic to it.

    You see, that understanding you expect to be developed for AI is not there for you with photography.

    If you will, any work that does not encompass the whole world is applying a filter and therefore a bias of some sort. We don’t expect a photo to X-ray the roots of a tree, because we understand the physical constraints of photography. Sure, something could be just out of frame, something else could have been photoshopped out, you can create a different story by selecting different photos, and so on. But we understand “what” a photo represents. I doubt we have the same understanding of “what” an LLM represents, what the constraints on the possible answers are, and we definitely don’t understand why a specific answer is chosen over the infinite other possibilities.

  • In theory something similar could be done with enough training. Guess what that would cost. Does enough clean water and energy exist to train it? Probably best not to find out, but techbros will try.

    I don't think a logical system like an ontology is really capable of being represented in neural networks with any real fidelity.

  • I don't think a logical system like an ontology is really capable of being represented in neural networks with any real fidelity.

    Well it does great with completely illogical systems. I wonder if one can be used for a random seed? 🤔

  • Depends. For an expert, that is self-evident (even if it might not be clear which biases have been incorporated). But that is not how it has been marketed. ChatGPT and similar are perceived as answering “the truth” at all times, and that skews the user’s understanding of the answers. Researching how deeply the answers are affected by the coders’ bias is the focus of their research and a worthwhile undertaking, to avoid overlooking something important.

    For an expert, that is self evident

    I am far from an expert, but it seemed obvious to me.

  • For an expert, that is self evident

    I am far from an expert, but it seemed obvious to me.

    I teach, nothing is evident to anyone 😭
