
PNG has been updated for the first time in 22 years — new spec supports HDR and animation

Technology
  • It's not irrelevant, it's that you don't actually know if it's true or not, so it's not a valuable contribution.

    If you started your comment by saying "This is something I completely made up and may or may not be correct" and then posted the same thing, you should expect the same result.

    I did check some of the references.

    What I don't understand is why you would perceive this content as more trustworthy if I hadn't said it's AI.

    Nobody should blindly trust some anonymous comment on a forum. I have to check what the AI blurts out, but you can just swallow the comment of some stranger without exercising some critical thinking yourself?

    As long as I'm transparent about the source, and especially since I did check some of it to be sure it's not some kind of hallucination...

    There shouldn't be any difference in trust between some random comment on a social network and what some AI model thinks on a subject.

    Also, it's not like this is some important topic with societal implications. It's just a technical question that I had (and still have) and that doesn't warrant real research. None of my work depends on that lib. So before my comment there was no information on compatibility. Now there is, but you have to look at it critically and decide whether you want to verify it or trust it.

    That's why I regret this kind of stubborn downvoting, where people just assume the worst instead of checking the actual data.

    Sometimes I really wonder: am I the only one supposed to check the data? Isn't everybody here capable of verifying the AI output if they think it's worth the time and effort?

    Basically, downvoting here is choosing "no information" over "information I have to verify because it's AI generated".

    Edit: Also, I could have just summarized the AI output myself and not mentioned AI. What then? Would you have checked the accuracy of that data? Critical thinking is not something you use "sometimes" or just "on some comments".

  • Likely you'll see only the first frame on older software. Encoding the animation in a dedicated animation chunk and using the base spec for the first keyframe sounds like the sane thing to do, so they likely did that.

    I'm not going to look into it now, because I would then have to implement it. 😄

    Haha, don't worry, just curious. Your answer is good!
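
    The first-frame fallback described above comes from PNG's chunk naming convention: a lowercase first letter marks a chunk as ancillary, so decoders that don't recognize it are allowed to skip it. The sketch below is a toy illustration, not a real decoder — it hand-builds a minimal file using the APNG-style `acTL` animation-control chunk (which the new spec folds in) and walks the chunk list.

    ```python
    import struct
    import zlib

    def chunk(ctype: bytes, data: bytes) -> bytes:
        """Serialize one PNG chunk: length, type, data, CRC over type+data."""
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))

    def list_chunks(png: bytes):
        """Walk the chunk sequence after the 8-byte PNG signature."""
        pos, out = 8, []
        while pos < len(png):
            (length,) = struct.unpack(">I", png[pos:pos + 4])
            ctype = png[pos + 4:pos + 8].decode("ascii")
            # Lowercase first letter marks an ancillary chunk: decoders
            # that don't understand it may skip it entirely.
            out.append((ctype, ctype[0].islower()))
            pos += 12 + length  # 4 length + 4 type + data + 4 CRC
        return out

    # Hand-built toy file: IHDR, an APNG-style acTL animation-control
    # chunk, then the plain first-frame IDAT and IEND.
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0))
           + chunk(b"acTL", struct.pack(">II", 2, 0))  # 2 frames, loop forever
           + chunk(b"IDAT", zlib.compress(b"\x00\x00\x00\x00\x00"))
           + chunk(b"IEND", b""))

    print(list_chunks(png))
    # → [('IHDR', False), ('acTL', True), ('IDAT', False), ('IEND', False)]
    ```

    An older decoder walking the same file simply skips the ancillary animation chunks and decodes the ordinary `IDAT` — which is exactly why it shows only the first frame.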

  • I did check some of the references. What I don't understand is why you would perceive this content as more trustworthy if I didn't say it's AI. […]

    You realize that if we wanted to see an AI LLM response, we'd ask an AI LLM ourselves.
    What you're doing is akin to:

    Hey guys, I've asked Google if the new PNG is backward compatible, and here are the first links it gave me, hope this helps: [list 200 links]

  • Now, if anyone doesn't mind explaining: PNG vs JXL?

    JXL is badly supported, but it does offer lossless encoding in a more flexible and much more efficient way than PNG does.

    Basically, JXL could theoretically replace PNG, JPEG, and even EXR.

  • I don't know. If the poster couldn't be bothered to fact-check, why would I? It's just safer to assume it might be misinformation.

    If you prefer to know nothing about PNG compatibility rather than something that might be true about PNG, that's fine, but it's definitely not my approach.

    Also, as I said to another commenter, critical thinking is not some tool you decide to use on some comments and not others. An AI answer on some topics is actually more likely to be correct than an answer by a human being. And that's not something I was told by an AI guru; it's what researchers at many universities are evaluating: ask a human to complete various tasks, then ask the AI model, and compare the data scientifically. It turns out there are tasks where the AI outperforms the human pretty much all the time.

    YET on this particular task the assumption is that it's bullshit, and it just gets downvoted. Had I posted the same data as my own, for some reason I wouldn't see a single downvote. The same data, presented differently, completely changes the perceived likelihood of it being accurate. Even though, at the end of the day, you shouldn't blindly trust either a comment from a human or an AI output.

    Honestly, I'm saddened to see people completely rejecting the technology instead of trying to understand what it's good at and what it's bad at, and most importantly experiencing it themselves.

    I wanted to know what generative AI was worth, so I read about it and tried it locally with open source software. Now I know how to spot images that are AI generated; I know what's difficult for this tech and what is not. I think that's a much healthier attitude than blindly rejecting any and all AI outputs.

  • Ooh, that was the coaster company, I remember them.

  • That depends. Something like HDR should be able to fall back to non-HDR since it largely just adds data, so if the format specifies that extra information is ignored, there's a chance it works fine.

    I'm not sure you can turn an HDR image into a regular one just by snipping it down to 8 bits per channel and discarding the rest.

    I mean it would work but I'm not certain you'll get the best results.

  • JXL is badly supported, but it does offer lossless encoding in a more flexible and much more efficient way than PNG does.

    Basically, JXL could theoretically replace PNG, JPEG, and even EXR.

    Interestingly, I downloaded GNOME's pride month wallpaper to see what it looked like, and the files were JXL. Never seen them in the wild before that

  • I'm not sure you can turn an HDR image into a regular one just by snipping it down to 8 bits per channel and discarding the rest.

    I mean it would work but I'm not certain you'll get the best results.

    it would work

    And that's probably enough. I don't know enough about HDR to know if it would look anything like the artist imagined, but as long as it's close enough, it's fine if it's not optimal. Having things completely break is far less than ideal.

  • it would work

    And that's probably enough. I don't know enough about HDR to know if it would look anything like the artist imagined, but as long as it's close enough, it's fine if it's not optimal. Having things completely break is far less than ideal.

    You'd probably get some colours that end up being quite off target. But you'll get an image to display. So in the end it depends on how much "not optimal" you're ready to accept.

  • You realize that if we wanted to see an AI LLM response, we'd ask an AI LLM ourselves.
    What you're doing is akin to:

    Hey guys, I've asked Google if the new PNG is backward compatible, and here are the first links it gave me, hope this helps: [list 200 links]

    I understand that. It's the downvoting of a response clearly marked as AI output that I'm questioning. Is it detrimental to the conversation here to have that? Is it better to share nothing rather than this LLM output?

    Was this thread better without it?

    Is complete ignorance of the PNG compatibility question preferable to reading this AI output and pondering how true it is?

    [list 200 links]

    Now I think this conversation is just getting rude for no reason.
    I think the AI output was definitely not the "I'm Feeling Lucky" result of a Google search, and the fact that you chose that metaphor is in bad faith.

  • Was this thread better without it?

    Yes.

    I, and I assume most people, go into the comments on Lemmy to interact with other people. If I wanted to fucking chit-chat with an LLM (why you'd want to do that, I can't fathom), I'd go do that. We all have access to LLMs if we wish to have bullshit with a veneer of eloquence spouted at us.

  • You'd probably get some colours that end up being quite off target. But you'll get an image to display. So in the end it depends on how much "not optimal" you're ready to accept.

    Right, and it depends on what "quite off target" means. Are we talking about greens becoming purples? Or dark greens becoming bright greens? If the image is still mostly recognizable, just with poor saturation or contrast or whatever, I think it's acceptable for older software.
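
    To put rough numbers on "quite off target": hard-clipping high-dynamic-range values into 8 bits crushes every highlight to the same white, while even a crude global tone-mapping curve keeps highlights distinguishable at the cost of shifting every value. The sketch below is a toy illustration only — it uses Reinhard's x/(1+x) curve purely as an example and ignores transfer functions and gamut mapping entirely.

    ```python
    def clip_to_8bit(linear: float) -> int:
        """Naive fallback: clamp linear light to [0, 1] and quantize."""
        return min(255, round(linear * 255))

    def reinhard_to_8bit(linear: float) -> int:
        """Crude global tone map: compress the whole range into [0, 1) first."""
        return round(linear / (1.0 + linear) * 255)

    # A midtone, then highlights at 2x and 4x SDR white:
    for v in (0.5, 2.0, 4.0):
        print(v, clip_to_8bit(v), reinhard_to_8bit(v))
    # → 0.5 128 85
    # → 2.0 255 170
    # → 4.0 255 204
    ```

    With clipping, the 2x and 4x highlights collapse to the same 255; with the tone map they stay distinct, but the midtone drifts from 128 to 85 — the "off target but still recognizable" tradeoff discussed above.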

  • As long as I'm transparent on the source and especially since I did check some of it to be sure it's not some kind of hallucination... […]

    Are you really asking why advertising that "the following comment may be hallucinated" nets you more downvotes than just omitting that fact?

    You're literally telling people "hey, this is a low effort comment" and acting flabbergasted that it gets you downvotes.

  • Goodbye GIF, hello PNG?

    Is it pronounced png or png?

  • Is it pronounced png or png?

    PNG, like "PNG"

  • Also it’s not like this is some important topic with societal implications. It’s just a technical question that I had (and still doesn’t) that doesn’t mandate researching.

    So why "research" it with AI in the first place, if you don't care about the results and don't even think it's worth researching? This is legitimately absurd to read.

  • This post did not contain any content.

    PNG PNG!

  • PNG, like "PNG"

    No no no, it's pronounced "PNG"

  • This post did not contain any content.

    How does this compare to Nvidia's JXR HDR screenshots?
