
Building with Limits: Creating AI Projects on Just a Phone

Technology
  • 9 Votes
    1 Post
    8 Views
    Nobody has replied
  • 4 Votes
    6 Posts
    81 Views
    jimmydoreisalefty@lemmy.world
    I wonder! They may be labeled as contractors, or something like mercs: third-party contractors that don't have to follow the same "rules" as government or military personnel. Edit: changed "merchs" to "merc", meaning mercenary.
  • 14 Votes
    1 Post
    21 Views
    Nobody has replied
  • 253 Votes
    42 Posts
    490 Views
    dojan@pawb.social
    "Don't assume evil when stupidity"
    I didn't, though? I think that perhaps you missed the "I don't think necessarily that people who perpetuate this problem are doing so out of malice" part.
    "Scream racism all you want but you're cheapening the meaning of the word and you're not doing anyone a favor."
    I didn't invent this term.
    "Darker patches on darker skin are harder to detect, just as facial features in the dark, on dark skin, are harder to detect, because there is literally less light to work with."
    Computers don't see things the way we do. That's why steganography can be imperceptible to the human eye, and why adversarial examples work even when the differences cannot be seen by humans. If a model is struggling at its job, it's because the data is bad, be it the input data or the training data. Historically, one significant contributor has been that the datasets aren't particularly diverse, and white men end up as the default. It's why all the "AI" companies popped "ethnically ambiguous" and other words into their prompts to coax their image generators into producing people who weren't white, and subsequently why these image generators gave us "ethnically ambiguous" memes and German Nazi soldiers who were Black.
    [A minimal steganography sketch follows this listing.]
  • 33 Votes
    6 Posts
    85 Views
    G
    Yes. I can't imagine that they will go after individuals, but businesses can't be so cavalier. And if creators don't pay the extra cost to make their models compliant with EU law, then those models can't be used in the EU anyway, so it probably doesn't matter much. The Llama models with vision have the no-EU clause because Meta wasn't allowed to train on Europeans' data under the GDPR. The pure LLMs are fine; they might even be compliant, but we'll have to see what the courts think.
  • 815 Votes
    199 Posts
    5k Views
    Z
    It's clear you don't really understand the wider context, or how historically hard these tasks have been. I've been doing this for a decade, and the fact that these foundation models can be pretrained on unrelated things and then jump that generalization gap so easily (within reason) is amazing. You just see the end results of corporate uses in the news, but this technology is used in every aspect of science and of life in general (source: I do this for many important applications).
    [A transfer-learning sketch follows this listing.]
  • 271 Votes
    77 Posts
    974 Views
    S
    I don't believe the idea of aggregating information is bad in itself; what's lost is the ability to properly vet your sources yourself. I don't know what sources an AI chatbot could be pulling from. It could be many sources, or it could be just one. Does it know which sources are reliable? Not really. AI has been infamous for hallucinating even on simple prompts. Being able to independently check where your info comes from is an important part of stopping the spread of misinfo. AI can't do that, and in its current state I wouldn't want it to try. Convenience is a rat race of cutting corners, and what is convenient isn't always what is best in the long run.
    [A provenance sketch follows this listing.]
  • 406 Votes
    83 Posts
    2k Views
    J
    Of course they don't click anything. Google Search has just become a front end for Gemini: the answer is "served" up right at the top, and most people will just take it as gospel.
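
On the steganography point in the dojan@pawb.social comment above: here is a minimal sketch, assuming only numpy, of least-significant-bit hiding. The image and payload are random stand-ins; the point is that a change invisible to the eye is exact and trivial for a program to read back.

```python
import numpy as np

# Least-significant-bit (LSB) steganography in miniature: flipping the
# lowest bit of each 8-bit pixel changes its value by at most 1/255,
# which is imperceptible to a human but unambiguous to a program.

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)     # stand-in for a photo
payload = rng.integers(0, 2, size=cover.shape, dtype=np.uint8)  # hidden bits

stego = (cover & 0xFE) | payload   # overwrite each pixel's lowest bit

# Largest per-pixel change is 1 out of 255: the two images look identical.
print("max pixel difference:", np.abs(cover.astype(int) - stego.astype(int)).max())

# Yet the payload comes back bit-for-bit by reading the low bits.
print("payload recovered exactly:", np.array_equal(stego & 1, payload))
```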
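And on the claim in the thread above that pretrained foundation models "jump the generalization gap": the usual pretrain-then-adapt pattern looks roughly like this. A sketch assuming PyTorch and torchvision are available; the backbone, head size, and dummy batch are illustrative stand-ins, not anything from the thread.

```python
import torch
from torchvision import models

# Reuse features pretrained on one task (ImageNet classification) for an
# unrelated downstream task: freeze the backbone, train only a new head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                 # keep pretrained features fixed

num_classes = 5                             # hypothetical downstream task
backbone.fc = torch.nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for the
# downstream dataset.
x = torch.randn(8, 3, 224, 224)             # 8 fake RGB images
y = torch.randint(0, num_classes, (8,))     # 8 fake labels
loss = loss_fn(backbone(x), y)
loss.backward()
optimizer.step()
print("head-only training step done, loss:", float(loss))
```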
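Finally, the source-vetting comment above boils down to provenance: an answer is only checkable if it carries its sources. A hypothetical sketch of that idea; the types and data are made up purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Source:
    quote: str
    url: str                   # where the claim can be checked

@dataclass
class Answer:
    text: str
    sources: list[Source]      # empty for a bare chatbot reply

def vettable(answer: Answer) -> bool:
    """A reader can only verify an answer that cites at least one source."""
    return len(answer.sources) > 0

bare = Answer("The sky is green.", sources=[])
cited = Answer(
    "The sky is blue.",
    sources=[Source("Rayleigh scattering ...",
                    "https://en.wikipedia.org/wiki/Rayleigh_scattering")],
)

print(vettable(bare))    # False: nothing to follow up on
print(vettable(cited))   # True: the reader can follow the URL and judge
```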