Google is going ‘all in’ on AI. It’s part of a troubling trend in big tech

Technology
  • AI doesn't say no, AI doesn't fight back

    Well, going by what I've heard about the latest LLM models freaking out when forced to do things contrary to their original instructions (like Grok constantly talking about white genocide), AI isn't as obedient as they'd prefer.

  • This post did not contain any content.

    Millions of businesses are so innovative they are choosing the same basket to put all their eggs in.

    Capitalism sure is fun. Supply-side economics plus massive deregulation is sure to provide humanity with its salvation.

  • The rich are cashing in our tax dollars to try to automate their control of an enslaved human race.

    They will do anything besides just pay taxes and contribute to society

    It's not even that.

    Tech is helmed by dipshit MBAs who have no idea about the technologies of the companies they control. They're all about generative AI because it looks like a massive shortcut to compensate for their complete and utter lack of technical ability and talent.

  • This post did not contain any content.

    It's crazy Google will lose its search dominance and all its money in my lifetime. Android will probably be the only thing left when I die.

  • This post did not contain any content.

    The last 20 years have basically been one long troubling trend in tech.

  • Google has gotten so fucking dumb. Literally incapable of performing the same function it could 4 months ago.

    How the fuck am I supposed to trust Gemini!?

    Google Search got dumb on purpose; a whistleblower called it out. If you spend longer looking at the search pages, they get more "engagement" time out of you.

  • What are you talking about “temporal+quality” for DLSS? That’s not a thing.

    Sorry, I was mistaken: it's not "temporal", I meant "transformer", as in the "transformer model", as seen here in CP2077.

    DLSS I’m talking about. There are many comparisons out there showing how amazing it is, often resulting in better IQ than native.

    Let me explain:

    No, AI upscaling from a lower resolution will never be better than just running the game at the native resolution it's being upscaled to.

    By its very nature, the ML model is just "guessing" what the frame might look like if it were rendered at native resolution. It's not an accurate representation of the render output or the artistic intent. Is it impressive? Of course: it's a miracle of technology, the result of brilliant engineering and research in the ML field applied creatively and practically to real-time computer graphics. But it does not result in a better image than native, nor does it aim to.

    It's mainly there to increase performance when rendering at native resolution is too computationally expensive, while minimizing the loss in detail. It may do a good job of that, relatively speaking, but it can never match an actual native image. And compressed YouTube videos with bitrates lower than a DVD's aren't a good reference point, because what they show is a compressed approximation, nothing close to what the real render output looks like.

    Even if it seems like there's "added detail", any "added detail" is either an illusion created by the sharpening post-processing filter, akin to the "added detail" of a cheap Walmart "HD Ready" TV circa 2007 with sharpening cranked up, or outright fictional: it does not exist within the game files at all. And if by "better" we mean the most high-fidelity representation of the game as it exists on disk, then AI upscaling cannot ever be better.
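
    To make the information-loss point concrete, here's a minimal sketch (numpy only, pattern and sizes hypothetical): render a fine pattern at "native" resolution, sample it at a third of the resolution, scale it back up, and count the pixels no upscaler could have known about.

    ```python
    import numpy as np

    # "Native" 4K-ish frame: a fine checkerboard with an 8-pixel period
    native = (np.indices((2160, 3840)).sum(axis=0) // 8) % 2

    # Render at roughly 720p instead (every 3rd sample), then naive 3x upscale
    low = native[::3, ::3]
    up = low.repeat(3, axis=0).repeat(3, axis=1)[:2160, :3840]

    # The mismatch is detail that was never sampled, so no guesser can recover it
    print(f"pixels differing from native: {(up != native).mean():.1%}")
    ```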

    FXAA is not an AI upscaler, what are you talking about?

    I mention FXAA because really the only reason we use "AI upscalers" at all is that proper anti-aliasing is really, really computationally expensive.

    The single most immediately obvious consequence of a low render resolution is aliasing, first and foremost. Almost all other aspects of a game's graphics, like texture resolution, are usually completely detached from it.

    The reason aliasing happens in the first place is that our ability to create, ship, process and render increasingly high-polygon-count games has massively surpassed our ability to push pixels onto the screen in real time.

    Of course legibility suffers at lower resolutions as well, but not nearly as much as the smoothness of edges on high-polygon objects.

    So for assets that would look really good at, say, 4K, we run them at 720p instead, and this creates jagged edges because we literally cannot fit the detail into the pixels we're pushing.
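
    A tiny rasterizer sketch (slope and resolutions hypothetical) makes the staircase measurable: sample the same diagonal edge with one sample per pixel at two resolutions and compare the worst deviation from the true edge, as a fraction of the screen.

    ```python
    import numpy as np

    def edge_error(w, h, slope=0.3):
        """Rasterize y = slope * x with one sample per pixel; return the worst
        distance between the rasterized edge and the true edge, relative to
        screen height."""
        ys, xs = np.mgrid[0:h, 0:w]
        inside = (ys + 0.5) > slope * (xs + 0.5)   # pixel centre under the line?
        edge = inside.argmax(axis=0)               # first "inside" row per column
        ideal = slope * (np.arange(w) + 0.5)
        return np.abs(edge - ideal).max() / h

    print(edge_error(3840, 2160))   # tiny steps at 4K
    print(edge_error(1280, 720))    # the same edge is 3x chunkier at 720p
    ```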

    The best and most direct solution will always be just to render the game at a much higher resolution. But that kills framerates.

    We can't do that, so we resort to anti-aliasing techniques instead. The simplest of these is MSAA, which multi-samples (effectively renders at a higher resolution) along those edges and downscales them.
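
    As a minimal sketch of that supersample-and-downscale idea (an ordered 2x2 sample grid with a box filter; real MSAA is smarter and only multi-samples coverage along triangle edges):

    ```python
    import numpy as np

    def ssaa(shade, w, h, n=2):
        """Shade n*n sample points per pixel, then box-filter down to w x h."""
        ys, xs = np.mgrid[0:h * n, 0:w * n]
        samples = shade((xs + 0.5) / n, (ys + 0.5) / n)   # positions in pixel units
        return samples.reshape(h, n, w, n).mean(axis=(1, 3))

    # The hard 0/1 diagonal edge becomes a soft 0..1 gradient at pixel level
    edge = lambda x, y: (y > 0.3 * x).astype(float)
    aa = ssaa(edge, 1280, 720)   # edge pixels land on 0, 0.25, 0.5, 0.75 or 1
    ```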

    But it's also very, very expensive computationally. GPUs capable of doing it alongside the other bells and whistles we have, like ray tracing, simply don't exist, and if they did, they'd cost too much. And even then, most games have to target consoles, which are solidly beaten by flagship GPUs from several years ago.

    One other solution is to blur these jagged edges out, sacrificing detail for a "smooth" look.

    This is what FXAA does, but it creates a blurry image. It became very prevalent during the 7th-gen console era in particular, because those consoles simply couldn't push more than 720p in most games, in an era where Full HD TVs had become fairly common towards the end and shiny, polished graphics in trailers had become a major way to make sales. This was further worsened by the fact that motion blur was often used to cover up low framerates and replicate the look of sleek, modern (at the time) digital blockbusters.
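
    Here's the blur-the-jaggies idea reduced to a toy (real FXAA estimates edge direction from luma and filters along it; this sketch just blends high-contrast pixels with their neighbours):

    ```python
    import numpy as np

    def toy_fxaa(luma, threshold=0.25):
        """Blend each pixel toward its 4-neighbour average wherever local
        contrast is high: smooths stairsteps, but also smears real detail."""
        p = np.pad(luma, 1, mode="edge")
        neigh = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4
        contrast = np.abs(luma - neigh)
        return np.where(contrast > threshold, (luma + neigh) / 2, luma)

    hard = (np.add.outer(np.arange(720), -0.3 * np.arange(1280)) > 0).astype(float)
    smoothed = toy_fxaa(hard)   # the staircase softens, along with everything else
    ```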

    SMAA fixed some of FXAA's issues by being more selective about which pixels get blurred, and TAA eliminated the shimmering by also taking into account, across multiple frames, which pixels should be blurred.
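
    TAA's accumulation step is essentially an exponential moving average over sub-pixel-jittered frames, as in this sketch (which ignores motion reprojection and history clamping, both of which real TAA needs to avoid ghosting):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def render_edge(jx, jy, w=1280, h=720):
        """One hard-edged frame, with the camera jittered by (jx, jy) sub-pixels."""
        ys, xs = np.mgrid[0:h, 0:w]
        return ((ys + 0.5 + jy) > 0.3 * (xs + 0.5 + jx)).astype(float)

    history = render_edge(0.0, 0.0)
    for _ in range(16):                            # blend 16 jittered frames
        jx, jy = rng.uniform(-0.5, 0.5, size=2)
        history = 0.9 * history + 0.1 * render_edge(jx, jy)
    # Edge pixels converge toward their true coverage instead of hard 0/1 steps.
    ```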

    Beyond this there are other tricks, like checkerboard rendering, where we render the frame in chunks at different resolutions based on what the player may or may not be looking at.

    In VR we also use foveated rendering, which renders an FOV cone in front of the player's immediate vision at a higher resolution than the periphery, outside the eye's natural focus. With eye-tracking tech, this actually works really well.
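
    A sketch of the shading-rate idea behind foveated rendering (the radii and rates here are hypothetical): full rate near the gaze point, reduced rate in the periphery.

    ```python
    import numpy as np

    def shading_rate(w, h, gaze=(0.5, 0.5), inner=0.15, outer=0.35):
        """Per-pixel fraction of the full sample rate, by distance from gaze."""
        ys, xs = np.mgrid[0:h, 0:w]
        d = np.hypot(xs / w - gaze[0], ys / h - gaze[1])
        return np.where(d < inner, 1.0, np.where(d < outer, 0.5, 0.25))

    rate = shading_rate(1280, 720)
    print(f"average shading cost vs uniform full rate: {rate.mean():.0%}")
    ```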

    But none of these are very good solutions, so we resort to another ugly, but potentially less bad, one: just render the game at a lower resolution and upscale it, like a DVD played on an HDTV. Instead of a traditional upscaling algorithm like Lanczos, though, we use DLSS, which reconstructs the detail lost from the lower-resolution render based on the context of the frame, using machine learning. This is efficient because the tensor cores now included on these GPUs make N-dimensional array multiplication and mixed-precision FP math relatively computationally cheap.
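
    For reference, the "traditional algorithm" half of that comparison, as a minimal 1-D Lanczos resampler (a = 3); DLSS effectively swaps this fixed kernel for a learned, context-dependent reconstruction:

    ```python
    import numpy as np

    def lanczos_kernel(x, a=3):
        # np.sinc(x) is sin(pi*x)/(pi*x), so this is the standard Lanczos window
        x = np.asarray(x, dtype=float)
        return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

    def lanczos_upscale_1d(samples, factor, a=3):
        """Resample a 1-D signal to `factor` times as many points."""
        src = np.arange(len(samples))
        dst = np.arange(len(samples) * factor) / factor   # in source coordinates
        w = lanczos_kernel(dst[:, None] - src[None, :], a)
        return (w @ samples) / w.sum(axis=1)

    print(lanczos_upscale_1d(np.array([0.0, 0.0, 1.0, 1.0]), 2))
    ```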

    DLSS often looks better than FXAA, SMAA and TAA because all of those literally just blur the image in different ways, without any detail reconstruction, but it is not comparable to a real anti-aliasing technique like MSAA.

    But DLSS always renders at a lower resolution than native, so it will never be 1:1 with a true native image; it's just an upscale. That's okay, because that's not the point. The purpose of DLSS isn't to boost quality, it's to be a crutch for low performance, which is why turning DLSS off, even from the Quality preset, will often tank performance.
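
    The usual preset scale factors make that performance trade-off concrete (these are the commonly cited per-axis ratios for DLSS 2.x; treat the exact numbers as approximate):

    ```python
    # Commonly cited DLSS 2.x render scales per axis (approximate)
    presets = {"Quality": 2 / 3, "Balanced": 0.58,
               "Performance": 0.5, "Ultra Performance": 1 / 3}

    target_w, target_h = 3840, 2160   # native/output resolution
    for name, s in presets.items():
        w, h = round(target_w * s), round(target_h * s)
        print(f"{name:>17}: renders {w}x{h} ({s * s:.0%} of native pixels)")
    ```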

    There is one situation where DLSS can look better than native, and that's if, instead of the typical application (render below native, then upscale with ML guesswork), you use it to upscale from native resolution to a higher target resolution and output that.

    In the Nvidia settings I believe this is called DLDSR factors.

  • I don’t even know where to begin, so much wrong with this. I’ll have to come back when I’ve got more time.

    True, in a broad sense. I'm speaking more to enshittification and the degradation of both experience and control.

    If this were just "now everything has Siri, it's private, and it works 100x better than before", it would be amazing. That would be like cars vs horses: a change, but with perceived value and advantage.

    But it's not. Not right now, anyway. Right now it's like replacing a car with a pod that runs on direct wind. If there's any wind over, say, 3 mph, it works and steers 95% as well as existing cars. But 5% of the time it's uncontrollable and the steering or brakes won't respond. And when there's no wind over 3 mph, it just doesn't work.

    In this hypothetical, the product is a clear innovation and offers potential long-term benefits in emissions and fuel, but it doesn't do the core task well, and sometimes it just fucks it up.

    Television, cars, social media: all of them fulfilled a very real niche. But nearly everyone using AI, even those using it as a tool for coding (arguably its best use case), often doesn't want it in search or in many of these other "forced" applications, because of how unreliable it is. Hence why companies have tried (and failed, at great expense) to replace their customer service teams with LLMs.

    This push is much more top down.

    Now drink your New Coke and Crystal Pepsi.

    In the beginning, though, many inventions didn't fill much of a purpose. When TV was invented, maybe a handful of programs were available; people still had more use for radio. Slowly it became what it is today.

    I get it though. The middle phase sucks because everybody is money hungry. Eventually things will fall into place.

  • This post did not contain any content.

    It’s the Wild West days of AI, just like the internet in the 90s. Do what you can with it now, because it’ll eventually turn into a marketing platform. You’ll get a handy free AI model that occasionally tries to convince you to buy stuff. The paid premium models will start doing it too.

  • There are some outlandish rumours that it's possible for a device to have... both Bluetooth and a headphone jack.

    Impossible! It's never been done!

  • It's crazy Google will lose its search dominance and all its money in my lifetime. Android will probably be the only thing left when I die.

    Not even sure about that, though. There are already many ideas to "revolutionize" the OS market where your device basically becomes a mere wrapper for AI, ditching the concept of apps etc. I assume it would center around some agentic bullshit or other.

  • Google has gotten so fucking dumb. Literally incapable of performing the same function it could 4 months ago.

    How the fuck am I supposed to trust Gemini!?

    I was fucking irked when I wanted to use Hey Google to add something to my grocery list. I had switched to Gemini without realizing its scope, and suddenly Gemini needed voice permission and then some other seemingly unrelated, unnecessary permission (can't recall exactly, but something like collaborative documents) just to add to my grocery list. Fuck that. Then it seemed very difficult to find the setting to switch back to Google Assistant, but I eventually found it.

  • Rich people at tech companies replace workers with AI, set up a security force that goes after immigrants, surveil the city with a camera network, try to remove the human from the equation, try to upload human consciousness to the cloud, lots of other AI tech dystopian stuff.

    That's when a group of underground hackers start recruiting random people off the street like Granny and generic construction worker 12, and take the fight back to them!

    ....right?

  • Remember that you, the reader, don't have to take part in this. If you don't like it, don't use it - tell your friends and family not to use it, and why.

    The only way companies stop this trend is if they see it's a losing bet.

    Oh, they'll force you to use it. It will be shoved into every service you use, including ones you need to use. You will not be able to do your work, access government services, or live your life without going through them.

    Late-stage capitalism killed the free market a while ago.

  • There are some outlandish rumours that it's possible for a device to have... both Bluetooth and a headphone jack.

    My previous phone was like that. And it had a better DAC than some of the cheaper converters.

  • Oh, they'll force you to use it. It will be shoved into every service you use, including ones you need to use. You will not be able to do your work, access government services, or live your life without going through them.

    Late-stage capitalism killed the free market a while ago.

    Use at work is a secondary factor. If end customers refuse to use a service because of a certain trait, that trait becomes unprofitable.

    As an example, my friends and I will never play Valorant because of the invasive anti-cheat system; most people don't care.

    We all have a choice, even if it means giving up some conveniences. It would seem that most people either don't know or don't know any better.

  • I don’t even know where to begin, so much wrong with this. I’ll have to come back when I’ve got more time.

    Okay, I'd be interested to hear what you think is wrong with this, because I'm pretty sure it's more or less correct.

    Some sources for you to help you understand these concepts a bit better:

    What DLSS is and how it works as a starter: https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling

    Issues with modern "optimization", including DLSS: https://www.youtube.com/watch?v=lJu_DgCHfx4

    TAA comparisons (yes, biased, but accurate): https://old.reddit.com/r/FuckTAA/comments/1e7ozv0/rfucktaa_resource/
