
Google is going ‘all in’ on AI. It’s part of a troubling trend in big tech

Technology
  • Google has gotten so fucking dumb. Literally incapable of performing the same function it could 4 months ago.

    How the fuck am I supposed to trust Gemini!?

    Google Search got dumb on purpose; a whistleblower called it out. If you spend longer looking at the search pages, they get more "engagement" time out of you....

  • What are you talking about “temporal+quality” for DLSS? That’s not a thing.

    Sorry I was mistaken, it's not "temporal", I meant "transformer", as in the "transformer model", as here in CP2077.

    DLSS I’m talking about. There are many comparisons out there showing how amazing it is, often resulting in better IQ than native.

    Let me explain:

    No, AI upscaling from a lower resolution will never be better than just running the game at the native resolution it's being upscaled to.

    By its very nature, the ML model is just "guessing" what the frame might look like if it were rendered at native resolution. It's not an accurate representation of the render output or the artistic intent. Is it impressive? Yes, of course: it's a miracle of technology and the result of brilliant engineering and ML research applied creatively and practically to real-time computer graphics. But it does not produce a better image than native, nor does it aim to.

    It's mainly there to increase performance when rendering at native resolution is too computationally expensive and results in poor performance, while minimizing the loss in detail. It may do a good job of that, relatively speaking, but it can never match an actual native image. And compressed YouTube videos with bitrates lower than a DVD's aren't a good reference point, because you're judging a compressed stream of the render, not anything close to what the real output looks like.
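
    For a rough sense of what "lower resolution" means in practice, here's a small sketch using the commonly cited per-axis render scales for the DLSS presets (approximate public figures, not taken from any official spec; the dictionary and helper names are just made up for illustration):

    ```python
    # Approximate, commonly cited DLSS internal render scales per preset
    # (per axis). Treat these as ballpark figures, not official values.
    PRESET_SCALE = {
        "Quality": 2 / 3,
        "Balanced": 0.58,
        "Performance": 0.50,
        "Ultra Performance": 1 / 3,
    }

    def internal_resolution(out_w, out_h, preset):
        """Resolution the GPU actually renders before DLSS upscales to the output."""
        s = PRESET_SCALE[preset]
        return round(out_w * s), round(out_h * s)

    for preset in PRESET_SCALE:
        w, h = internal_resolution(3840, 2160, preset)
        print(f"{preset:>17}: renders ~{w}x{h}, upscaled to 3840x2160")
    ```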

    Even if it seems like there's "added detail", any "added detail" is either an illusion from the sharpening post-processing filter, akin to the "added detail" of a cheap Walmart "HD Ready" TV circa 2007 with the sharpening cranked up, or it's outright fictional and does not exist within the game files at all. If by "better" we mean the highest-fidelity representation of the game as it exists on disk, then AI upscaling cannot ever be better.

    FXAA is not an AI upscaler, what are you talking about?

    I mention FXAA because really the only reason we use "AI upscalers" at all is that anti-aliasing is really, really computationally expensive.

    The single most immediately obvious consequence of a low render resolution is aliasing, first and foremost. Almost all other aspects of a game's graphics, like texture resolution, are usually completely detached from it.

    The reason aliasing happens in the first place is that our ability to create, ship, process and render increasingly high-polygon-count games has massively surpassed our ability to push pixels to the screen in real time.

    Of course legibility suffers at lower resolutions as well, but not nearly as much as the smoothness of edges on high-polygon objects.

    So for assets that would look really good at say, 4K, we run them at 720p instead, and this creates jagged edges because we literally cannot make the thing fit into the pixels we're pushing.
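
    For a sense of scale (plain arithmetic, nothing vendor-specific), the same geometry has to land on roughly a ninth of the pixels at 720p compared to 4K:

    ```python
    # Pixel budgets at common resolutions: 4K has ~9x the pixels of 720p.
    resolutions = {"4K": (3840, 2160), "1440p": (2560, 1440), "720p": (1280, 720)}

    for name, (w, h) in resolutions.items():
        print(f"{name:>6}: {w * h / 1e6:.2f} megapixels")

    print(f"4K / 720p pixel ratio: {(3840 * 2160) / (1280 * 720):.0f}x")  # -> 9x
    ```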

    The best and most direct solution will always be just to render the game at a much higher resolution. But that kills framerates.

    We can't do that, so we resort to anti-aliasing techniques instead. The simplest of these is MSAA, which just multi-samples (renders at a higher resolution) those edges and downscales them.
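
    Here's a toy sketch of that "take more samples per pixel, then average" idea. Real MSAA is smarter than this (it only multi-samples coverage at triangle edges and shades once per pixel); this brute-force version supersamples everything, which is closer to SSAA, but the edge-smoothing principle is the same. The scene and helper names are made up for illustration:

    ```python
    # Toy supersampling: shade each pixel by averaging a grid of sub-samples.
    # The "scene" is just a filled circle, standing in for real geometry.

    def in_circle(x, y, cx=8.0, cy=8.0, r=6.0):
        return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

    def render(width, height, samples_per_axis):
        image = []
        for py in range(height):
            row = []
            for px in range(width):
                hits = 0
                for sy in range(samples_per_axis):
                    for sx in range(samples_per_axis):
                        # Regular sub-pixel sample grid inside this pixel.
                        x = px + (sx + 0.5) / samples_per_axis
                        y = py + (sy + 0.5) / samples_per_axis
                        hits += in_circle(x, y)
                row.append(hits / samples_per_axis ** 2)  # coverage in [0, 1]
            image.append(row)
        return image

    aliased = render(16, 16, 1)   # 1 sample per pixel: hard, jagged edge
    smoothed = render(16, 16, 4)  # 4x4 samples per pixel: soft edge gradient
    print(aliased[3][:8])
    print(smoothed[3][:8])
    ```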

    But it's also very, very expensive computationally. GPUs capable of doing it alongside the other bells and whistles we have, like ray tracing, simply don't exist, and if they did they'd cost too much. Even then, most games have to target consoles, which are solidly beaten by a flagship GPU even from several years ago.

    One other solution is to blur these jagged edges out, sacrificing detail for a "smooth" look.

    This is what FXAA does, but it creates a blurry image. It became very prevalent during the 7th-gen console era in particular, because those consoles simply couldn't push more than 720p in most games, in an era where Full HD TVs had become fairly common towards the end and shiny, polished graphics in trailers had become a major way to make sales. This was further worsened by the fact that motion blur was often used to cover up low framerates and replicate the look of sleek, modern (at the time) digital blockbusters.

    SMAA fixed some of FXAA's issues by being more selective about which pixels were blurred, and TAA eliminated the shimmering effect by also taking into account which pixels should be blurred across multiple frames.
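
    The core of TAA is just blending the current frame into an accumulated history. A minimal sketch below, assuming a static camera so there's no motion-vector reprojection or history clamping (which real TAA needs in order to decide which pixels are safe to blend across frames); the function name is made up for illustration:

    ```python
    # Minimal temporal accumulation: out = lerp(history, current, alpha).

    def taa_accumulate(frames, alpha=0.2):
        history = list(frames[0])
        outputs = [list(history)]
        for frame in frames[1:]:
            history = [(1 - alpha) * h + alpha * c for h, c in zip(history, frame)]
            outputs.append(list(history))
        return outputs

    # A shimmering edge pixel that flips between 1.0 and 0.0 every frame:
    # the flicker is damped toward the average instead of swinging the
    # full 0-to-1 range.
    shimmer = [[1.0], [0.0], [1.0], [0.0], [1.0], [0.0]]
    for out in taa_accumulate(shimmer):
        print(round(out[0], 3))
    ```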

    Beyond this there are other tricks, like checkerboard rendering, where we only render half the pixels each frame in a checkerboard pattern and reconstruct the rest from neighbouring pixels and the previous frame.

    In VR we also use foveated rendering, which renders a cone in front of the player's immediate vision at a higher resolution than their periphery (outside the eye's natural focus). With eye-tracking tech, this actually works really well.

    But none of these are very good solutions, so we resort to another ugly, but potentially less bad, solution: just rendering the game at a lower resolution and upscaling it, like a DVD played on an HDTV. Instead of a traditional upscaling algorithm like Lanczos, though, we use DLSS, which reconstructs detail lost from the lower-resolution render based on the context of the frame, using machine learning. This is efficient because of the tensor cores now included on modern Nvidia GPUs, which make N-dimensional array multiplication and mixed-precision floating-point math relatively cheap.
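
    For contrast, this is roughly what a traditional non-ML upscale looks like in practice: a sketch using Pillow (assuming Pillow 9.1+ for the Resampling enum; the function name and file names are made up for illustration). A fixed filter like this only interpolates between existing pixels, so it can't reconstruct detail that was never rendered:

    ```python
    from PIL import Image

    def lanczos_upscale(path_in, path_out, target=(3840, 2160)):
        """Upscale a lower-res render to 4K with a fixed windowed-sinc (Lanczos) filter."""
        with Image.open(path_in) as img:
            img.resize(target, resample=Image.Resampling.LANCZOS).save(path_out)

    # Hypothetical file names, purely for illustration:
    # lanczos_upscale("frame_1440p.png", "frame_4k_lanczos.png")
    ```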

    DLSS often looks better than FXAA, SMAA and TAA because all of those just blur the image in different ways, without any detail reconstruction, but it is not comparable to a real anti-aliasing technique like MSAA.

    But DLSS always renders at a lower resolution than native, so it will never be 1:1 with a true native image; it's just an upscale. That's okay, because that's not the point. The purpose of DLSS isn't to boost quality, it's to be a crutch for low performance, which is why turning DLSS off, even from the Quality preset, will often tank performance.

    There is one situation where DLSS can look better than native, and that's if, instead of the typical application of DLSS (rendering below native and then upscaling with ML guesswork), you use it to upscale the image from native resolution to a higher target resolution and output that.

    In Nvidia settings I believe this is called DL DSR factors.

  • I don’t even know where to begin, so much wrong with this. I’ll have to come back when I’ve got more time.

  • True, in a broad sense. I am speaking moreso to enshittification and the degradation of both experience and control.

    If this was just "now everything has Siri, it's private and it works 100x better than before" it would be amazing. That would be like cars vs horses. A change, but a perceived value and advantage.

    But it's not. Not right now anyways. Right now it's like replacing a car with a pod that runs on direct wind. If there is any wind over say, 3mph it works, and steers 95% as well as existing cars. But 5% of the time it's uncontrollable and the steering or brakes won't respond. And when there is no wind over 3mph it just doesn't work.

    In this hypothetical, the product is a clear innovation, offers potential benefits long term in terms of emissions and fuel, but it doesn't do the core task well, and sometimes it just fucks it up.

    The television, cars, social media, all fulfilled a very real niche. But nearly everyone using AI, even those using it as a tool for coding (arguably its best use case) often don't want to use it in search or in many of these other "forced" applications because of how unreliable it is. Hence why companies have tried (and failed at great expense) to replace their customer service teams with LLMs.

    This push is much more top down.

    Now drink your New Coke and Crystal Pepsi.

    In the beginning, though, many inventions didn't serve much of a purpose. When TV was invented, maybe a handful of programs were available. People still had more use for radio. Slowly it became what it is today.

    I get it though. The middle phase sucks because everybody is money hungry. Eventually things will fall into place.

  • This post did not contain any content.

    It’s the Wild West days of AI, just like the internet in the 90s. Do what you can with it now, because it’ll eventually turn into a marketing platform. You’ll get a handy free AI model that occasionally tries to convince you to buy stuff. The paid premium models will start doing it too.

  • There are some outlandish rumours that it's possible for a device to have... both Bluetooth and a headphone jack.

    Impossible! It's never been done!

  • It's crazy Google will lose its search dominance and all its money in my lifetime. Android will probably be the only thing left when I die.

    Not even sure about that though. There are many ideas already to "revolutionize" the OS market where your device basically becomes a mere wrapper for AI, ditching the concept of apps, etc. I assume it would center around some agentic bullshit or the like.

  • Google has gotten so fucking dumb. Literally incapable of performing the same function it could 4 months ago.

    How the fuck am I supposed to trust Gemini!?

    I was fucking irked when I wanted to use Hey Google to add something to my grocery list. I had switched to Gemini not realizing its scope, and suddenly Gemini needed voice permission, then some other seemingly unrelated, unnecessary permission (can't recall exactly, but something like collaborative documents), just to add to my grocery list. Fuck that. Then it seemed very difficult to find the setting to switch back to Google Assistant, but I eventually found it.

  • Rich people at tech companies replace workers with AI, set up a security force that goes after immigrants, surveil the city with a camera network, try to remove the human from the equation, try to upload human consciousness to the cloud, lots of other AI tech dystopian stuff.

    That's when a group of underground hackers start recruiting random people off the street like Granny and generic construction worker 12, and take the fight back to them!

    ....right?

  • Remember that you, the reader, don't have to take part in this. If you don't like it, don't use it - tell your friends and family not to use it, and why.

    The only way companies stop this trend is if they see it's a losing bet.

    Oh they'll force you to use it. It will be shoved into every service you use, including ones you need to use. You will not be able to do your work, access government services, or live your life without going through them.

    Late-stage capitalism killed the free market a while ago.

  • There are some outlandish rumours that it's possible for a device to have... both Bluetooth and a headphone jack.

    My previous phone was like that. And it had a better DAC than some of the cheaper converters.

  • Oh they'll force you to use it. It will be shoved into every service you use, including ones you need to use. You will not be able to do your work, access government services, or live your life without going through them.

    Late-stage capitalism killed the free market a while ago.

    Use at work is a secondary factor. If end stage customers refuse to use a service because of a certain trait, that trait becomes unprofitable.

    As an example, my friends and I will never play Valorant because of the invasive anti-cheat system; most people don't care.

    We all have a choice, even if it means giving up some conveniences. It would seem that most people either don't know or don't know better.

  • I don’t even know where to begin, so much wrong with this. I’ll have to come back when I’ve got more time.

    Okay, I'd be interested to hear what you think is wrong with this, because I'm pretty sure it's more or less correct.

    Some sources for you to help you understand these concepts a bit better:

    What DLSS is and how it works as a starter: https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling

    Issues with modern "optimization", including DLSS: https://www.youtube.com/watch?v=lJu_DgCHfx4

    TAA comparisons (yes, biased, but accurate): https://old.reddit.com/r/FuckTAA/comments/1e7ozv0/rfucktaa_resource/

  • We're Not Innovating, We’re Just Forgetting Slower

    The author’s take is detached from reality, filled with hypocrisy and gatekeeping.

    "Opinionated" is another term - for friendliness and neutrality. Complaining about reality means a degree of detachment from it by intention.

    When was the last time, Mr author, you had to replace a failed DIMM in your modern computer?

    When was the last time, Mr commenter, you had to make your own furniture because it's harder to find a thing of the right dimensions to buy? But when that was more common, it was also easier to get the materials and the tools, because ordering things over the Internet and getting them delivered the next day was less common. In terms of managing my home I feel that the 00s were nicer than now.

    Were the centralized "silk road" of today with TSMC kicked out (by a nuke, suppose, or a political change), would you prefer less efficient yet more distributed production of electronics? That would have less allowance for the various things hidden from users that happen in modern RAM. Possibly much less.

    If there was no technological or production cost improvement, we'd just use the old version.

    I think their point was that there's no architectural innovation in some things.

    Yes, there is a regular shift in computing philosophy, but this is driven by new technologies and usually by computing performance descending to be accessible at commodity pricing. The Raspberry Pi wasn't a revolutionarily fast computer, but it changed the world because it was enough computing power and it was dirt cheap.

    Maybe those shifts are in market philosophies in tech.

    I agree, there is something appealing about it to you and me, but most people don't care… and that's okay! To them it's a tool to get something done. They are not in love with the tool, nor do they need to be. There's a screwdriver.

    I can imagine there's a fitting basic amount of attention a piece of knowledge gets. I can imagine some person not knowing how to use a screwdriver (substitute with something better) is below that. And some are far above that, maybe. I think the majority of humans are below the level of knowledge computers in our reality require. That's not the level you or the author possess. That's about the level I possessed in my childhood, nothing impressive.

    Mr. author, no one is stopping you from using your TI-99 today, but in fact you didn't use it to write your article either. Why is that? Because the TI-99 is a tiny fraction of the function and complexity of a modern computer. Creating something close to a modern computer from discrete components with "part numbers you can look up" would be massively expensive, incredibly slow, and comparatively consume massive amounts of electricity vs today's modern computers.

    It would seem we are getting a better deal from the same amount of energy spent with modern computers, then. Does this seem right to you? It's philosophy and not logic, but I think you know that for getting something you pay something. There's no energy out of nowhere. Discrete components may not make sense. But maybe the insane efficiency we have is paid for with our future. It's made possible by centralization of economy and society and geopolitics, which wasn't needed to make the TI-99.

    Do you think a surgeon understands how the CCD electronic camera attached to their laparoscope works? Is the surgeon uneducated because they aren't fluent in the circuit theory that allows the camera to display the guts of the patient they're operating on?

    A surgeon has another specialist nearby, and that specialist doesn't just know these things, but also a lot of other knowledge necessary for them and the surgeon to communicate unambiguously, avoiding fatal mistakes. A bit more expense is spent here than just throwing a device at a surgeon who doesn't understand how it works. A fair bit.

    Such gatekeeping! So unless you know the actual engineering principles behind a device you're using, you shouldn't be allowed to use it?

    Why not: such respect! In truth, why wouldn't we trust students to make good use of understanding their tools and the universe around them, since every human's corpus of knowledge is unique and wonderful, and not intentionally limit them?

    Innovation isn't just creating new features or functionality. In fact, most, I'd argue, is taking existing features or functions and delivering them for substantially less cost/effort.

    Is a change of policy innovation? In our world I see a lot of that. Driven by social and commercial and political interests, naturally.

    As I'm reading this article, I am thinking about a farmer watching Mr. author eat a sandwich made with bread.

    A basic touch on your further thoughts is supposed to be part of the school program in many countries.

    Perhaps, but these simple solutions also can frequently only offer simple functionality. Additionally, "the best engineering solutions" are often some of the most expensive. You don't always need the best, and if the best is the only option, then that may mean going without, which is worse than a mediocre solution and what we frequently had in the past.

    Does more complex functionality justify this? Who decides what we need? Who decides what is better and what is worse? This comes to policy decisions again. Authority. I think modern authority is misplaced, and were it not, we'd have an environment more similar to what the author wants.

    The reason your TI-99 and my C64 don't require constant updates is because they were born before the concept of cybersecurity existed. If you're going to have internet-connected devices then it's a near requirement to receive updates for security.

    Not all updates are for security. And an insecure device can still work year after year.

    If you don't want internet-connected devices, you can get those too, but they may be extremely expensive, so pony up the cash and put your money where your mouth is.

    Willpower is a tremendous limitation which people usually ignore. It's very hard to do this when everyone around you doesn't. It would be very easy if you were choosing for yourself, without network effects and interoperability requirements. So your argument, for me, doesn't work in your favor when looking closely. (Similar to "if you disagree with this law, you can explain it at the police station".)

    I don't think even a DEC PDP-11 mainframe sold in the same era was entirely known by a handful of people, and even that is a tiny fraction of the functionality of today's cheap commodity PCs.

    There's a graphical 2D space shooter game for the PDP-11. Just saying. Also, some Soviet clones were made on its architecture, in the form factor of PCs. With networking capabilities, they were used as command machines for other kinds of simpler PCs, or for production lines, and could be used as file shares, IIRC. I don't remember what they were called, but the absolutely weirdest part was seeing in comments people remembering using them in university computer labs and even in school computer labs, so that actually existed in the USSR. Kinda expensive though, even without Soviet inefficiency.

    It was made as a consumer electronics product with the least cost they thought they could get away with and have it still sell.

    Yes, which leads to different requirements today. This doesn't stop the discussion; it leads to the question of what changed. We are not obligated to take the perpetual centralization of economies and societies like some divine judgement.

    We don't need most of these consumer electronics to last.

    Who's "we"? Are you deciding what Intel R&D will focus on, or what Microsoft will change in their OS and applications, or what Apple will produce? Authority, again.

    If it still works, why isn't he using one? Could it be he wants the new features and functionality like the rest of us?

    Yes. It still works for offline purposes. It doesn't work where the modern web is not operable with it. This, in my opinion, reinforces their idea, not yours.

    These are my replies. I'll add my own principal opinion: a civilization can only be as tall as a human forming it. Abstractions leak, and our world is continuous, so all abstractions leak. To know which do and don't for a particular purpose, you need to know principles. You can use abstractions without looking inside them to build a system inside an architecture, but you can't build an architecture and pick real-world solutions for those abstractions without understanding those real-world solutions. Also, horizontal connections between abstractions are much more tolerant of leaks than vertical ones. And there's no moral law forbidding us to look above our current environment to understand in which directions it may change.
  • That they didn't have enough technicians trained in this to be able to ensure that one was always available during working hours, or at least when it was glaringly obvious that one was going to be needed that day, is . . . both extremely and obviously stupid, and par for the course for a corp whose sole purpose is maximizing profit for the next quarter.
  • OSTP Has a Choice to Make: Science or Politics?

    Ye I expect so, I don't like the way this author just doesn't bother explaining her points. She just states that she disagrees and says they should be left to their own rules. Which is probably fine, but that's just lazy or she's not mentioning the difference for another reason
  • Is the OutfinityGift project better than all NFTs?

    No one has replied.
  • It kinda seems like you don’t understand the actual technology.
  • Phew okay /s
  • ... robo chomo?
  • If you value privacy, ditch Chrome and switch to Firefox now

    Why did Firefox kill PWA support on desktop?