
Next-Gen Brain Implants Offer New Hope for Depression: AI and real-time neural feedback could transform treatments

Technology
  • no

  • Lol! What the actual fuck? No.

  • Guys, it's great! My depression is solved. I now love President King Musk.

    Long Live the King! I am totally not being mind-controlled right now. 😵💫🤖

  • I am not depressed, but I will never get a brain implant for any reason. The brain is the final frontier of privacy, it is the one place I am free. If that is taken away I am no longer truly autonomous, I am no longer truly myself.

    I understand this is how older generations feel about lots of things, like smartphones, which I am writing this from, and I understand how stupid it sounds to say "but this is different!", but like... really. This is different. Whatever scale smartphones, drivers licenses, personalized ads, the internet, smart home speakers.... whatever scale all these things lie on in terms of "panopticon-ness", a brain implant is so exponentially further along that scale as to make all the others vanish to nothingness. You can't top a brain implant. A brain implant is a fundamentally unspeakable horror which would inevitably be used to subjugate entire peoples in a way so systematically flawless as to be almost irreversible.

    This is how it starts. First it will be used for undeniable goods, curing depression, psychological ailments, anxiety, and so on. Next thing you know it'll be an optional way to pay your check at restaurants, file your taxes, read a recipe - convenience. Then it will be the main way to do those things, and then suddenly it will be the only way to do those things. And once you have no choice but to use a brain implant to function in society, you'll have no choice but to accept "thought analytics" being reported to your government and corporations. No benefit is worth a brain implant, don't even think about it (but luckily, I can't tell if you do).

    "You can chain me, you can torture me, you can even destroy this body, but you will never imprison my mind."

    -Mahatma Gandhi

  • No no no no no

    AI can't even do a google search right.

    Keep that shit outta my head

    Don't worry, I am sure you will change your mind after you get the implants.

  • So, their AI is confidently wrong over 60% of the time, and they thought implanting it into people's brains was a good idea?? Wtf???

  • "What am I without my legs?" "What am I without my eyes?" "What am I without my arms?"

    What counts as "the real me" has been evolving for decades, if not centuries. I'm not volunteering for brain implants, but I'm not writing off the idea sometime in the future. As for AI, this is going to be more of the ML variety, not the LLM variety. Think more of "neurochemical levels have been trending in a certain direction for too long, release opposing neurochemicals to halt the spiral" and less of a little voice inside your head giving quite possibly incorrect answers to whatever you're thinking of.

    This is absolutely risky stuff, but less risky than recurring electroshock therapy? Hard for me to say. Note that the article is from nearly 2 decades ago, but there are articles in the news from just the last couple weeks.
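The "neurochemical levels have been trending in a certain direction for too long, counteract the spiral" idea is essentially a closed-loop feedback controller. A toy sketch of that control logic, in case the distinction from an LLM isn't obvious (all names, thresholds, and the "stimulate"/"hold" actions here are invented for illustration and have nothing to do with any real device):

```python
from collections import deque

class ClosedLoopController:
    """Toy closed-loop neuromodulation policy: watch a biomarker's
    recent readings and trigger a counteracting action only when it
    stays past a threshold for a sustained stretch."""

    def __init__(self, threshold=-0.5, window=5):
        self.threshold = threshold          # level considered "too low"
        self.window = deque(maxlen=window)  # most recent readings

    def update(self, reading):
        self.window.append(reading)
        # Act only if *every* recent reading is below threshold,
        # i.e. a sustained trend rather than a momentary dip.
        sustained = (len(self.window) == self.window.maxlen
                     and all(r < self.threshold for r in self.window))
        return "stimulate" if sustained else "hold"
```

No language model anywhere in the loop: it's simple signal processing over a trend, which is the point the comment is making.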

  • This seems interesting, I'll read it fully after work if I don't forget.

    Something has me convinced I'm depressed, but the only time I ever had the possibility to look for help they sort of just worked me towards the door and cut me off asap.

    But they ended up giving me some sort of antipsychotic medication, which definitely allowed me to get back on my feet at the time. (Shit was dark; I fell in a hole with covid, homelessness and unemployment altogether with my wife, and reached a point where I struggled so much I couldn't even get my ass to a job interview.)

    But I still don't know what the cause of my struggles is, only that they've been around as long as I can remember. Some form of psychotic whatever wouldn't surprise me either, looking at my mom and what she did. But from what I know (which isn't a lot, obviously) it seems more like depression.

    I likely had undiagnosed depression for decades before I got treatment, from a GP, no less, after being dismissed by a psychiatrist. If you have concerns about your health, keep trying to get help, as long as you're able.

  • Those are some good nuances that definitely require a nuanced response and forced me to refine my thinking, thank you! I'm actually not claiming that the brain is the sole boundary of the real me, rather that it is the majority of me, but my body is a contributor. The real me does change as my body changes, just in less meaningful ways. Likewise some changes in the brain change the real me more than others. However, regardless of what constitutes the real me or not (and believe me, the philosophical rabbit hole there is one I love to explore), in this case I'm really just talking about the straightforward immediate implications of a brain implant on my privacy. An arm implant would also be quite bad in this regard, but a brain implant is clearly worse.

    There have already been systems that can display very rough, garbled images of what people are thinking of. I'm less worried about an implant that tells me what to do or controls me directly, and more worried about an implant that has a pretty accurate picture of my thoughts and reports it to authorities. It's surely possible to build a system that can approximate positive or negative mood states, and in combination this is very dangerous. If the government can tell that I'm happy when I think about Luigi Mangione, then they can respond to that information however they want. Eventually, in the same way that I am conditioned by the panopticon to stop at a stop sign, even in the middle of a desolate desert where I can see for miles around that there are no cars, no police, no cameras - nothing that could possibly make a difference to me running the stop sign - the system will similarly condition automatic compliance in thoughts themselves. That is, compliance is brought about not by any actual exertion of power or force, but merely by the omnipresent possibility of its exertion.

    (For this we only need moderately complex brain implants, not sophisticated ones that actually control us physiologically.)

  • I absolutely think that privacy within your own mind should be inviolable (trusting corporations and even government to agree is laughable). Iain Banks' Culture series explores some of these implications, as well as who should be in control of your mental state. It's messy and hard, and is one of the reasons I currently wouldn't get a brain implant. I might change my mind if I had ALS, for instance.

  • All LLMs are AI, but not all AI is an LLM.

  • Tell me you don’t understand what depression is without telling me you don’t understand what depression is. You can be depressed while living for free on a beach with no responsibilities. To suggest you can fix everyone’s depression with external changes is the height of “just go outside and you’ll feel better.”

  • As someone who is literally living where others go on holidays, while depressed, let me tell you: my depression is very much a response to the world being a rotting shithole. I am not sad because my life sucks but because so many others are suffering and I feel powerless to change it. The narrative of 'chemical imbalance' is a very reductive and misleading one.

    The feeling of powerlessness and disconnect also points to the cure I find for myself. Instead of implanting experimental BS inventions into my brain, I try to be a force of connection, community and hope for others. There are very few things I can do as a single tiny person, but in these very small things lies the power of change for the better.

  • Depression replaced with horror?

    I'll take it.

    Horror might be more entertaining than depression, but the sheer idea of letting some techbro implant shit in my brain is so ridiculous, I'd probably try DIY lobotomy before I consider the AI option.

  • Chronic depression since a traumatic event trigger in 1989 here. They can shove those chips up their own arse.

    Depressed ever since puberty when I realised that Hollywood isn't real life.

  • Lobotomy is a bit extreme. Try trepanning first.

    Trepanning was sometimes performed on people who were behaving in a manner that was considered abnormal. In some ancient societies it was believed this released the evil spirits that were to blame.

  • Imagine withholding a medicine from a sick person, telling them it's the world that's broken. That's some Mother Teresa level evil.

    Wanting full control over the sick is also some Mother Teresa evil shit. It’s bad enough that life-saving medication is gate-kept by patents and pharmaceutical companies thriving on suffering (oh look, there she is again) but now people that suffer should give up access to what makes them them, their entire personhood, to some tech-bro ingrate? Is that truly the best option?

    I’d rather die.

  • Nope, I'd definitely kill myself before letting an AI fuck with my brain.

  • In a sense, AI is already fucking with everyone's brain when it comes to mass-produced ads and propaganda.

  • This sounds like an absolute nightmare. Listen to the techbro leaders talk about the general population, and imagine them owning what manages your brain... I mean electrically, as opposed to indirectly.

  • No, Social Media is Not Porn

    This feels dystopian and like overreach. But that said, there definitely is some porn on the 4 platforms they cited. It's an excuse, sure, but let's also not deny reality.
  • Same, especially when searching technical or niche topics. Since there aren't a ton of results specific to the topic, mostly semi-related results will appear in the first page or two of a regular (non-Gemini) Google search, just due to the higher popularity of those webpages compared to the relevant webpages. Even the relevant webpages will have lots of non-relevant or semi-relevant information surrounding the answer I'm looking for. I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites on the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out.

    I also think there's probably something in the code that looks for information shared across multiple sources and prioritizes that over something that's only on one particular page (possibly the sole result with the information you need). Then, it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now, it's an LLM generating text phrased as a direct answer to a question (that was also AI-generated from your search query) using AI-summarized data points scraped from multiple webpages. It's obfuscating the source material further, but I also can't help but feel like it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it into a question, and then prioritizes homogeneous results that agree on the "answer" to your "question". For years they've been doing this to a certain extent, they just didn't share how they interpreted your query.
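The "prioritize information shared across multiple sources" behavior this comment guesses at amounts to consensus ranking, which is easy to sketch. This is a toy illustration under that assumption, not Google's actual code; the function and claim names are invented:

```python
from collections import Counter

def consensus_rank(snippets_per_source):
    """Rank claims by how many sources repeat them, so a claim found
    on three semi-related pages outranks one found only on the single
    page that actually answers the query - garbage in, garbage out."""
    counts = Counter()
    for snippets in snippets_per_source:
        for claim in set(snippets):  # count each source at most once per claim
            counts[claim] += 1
    return [claim for claim, _ in counts.most_common()]
```

For example, `consensus_rank([["common claim", "rare but correct"], ["common claim"], ["common claim"]])` puts "common claim" first, even though the rare claim may be the right one.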
  • They'll only stop selling politicians and block that.
  • If you use LLMs like they should be, i.e. as autocomplete, they're helpful. Classic autocomplete can't see me type "import" and correctly guess that I want to import a file that I just created, but Copilot can. You shouldn't expect it to understand code, but it can type more quickly than you and plug the right things in more often than not.
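As a rough analogy for "completion, not comprehension": classic autocomplete ranks candidates by surface similarity to what's already typed, while an LLM completer brings in far more context (like that freshly created file). A minimal sketch of the classic kind, purely illustrative:

```python
import difflib

def complete(prefix, candidates):
    """Rank candidate lines by surface similarity to the typed prefix.
    No understanding involved - just string matching, which is exactly
    what context-aware LLM completion improves on."""
    scored = [(difflib.SequenceMatcher(None, prefix, c).ratio(), c)
              for c in candidates]
    return [c for _, c in sorted(scored, key=lambda t: t[0], reverse=True)]
```

Typing `complete("import nu", ["import os", "import numpy"])` ranks `"import numpy"` first purely on character overlap; it has no idea what numpy is.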
  • Microsoft Bans Employees From Using DeepSeek App

    "(Premise - suppose I accept that there is such a definable thing as capitalism)"

    I'm not sure why you feel the need to state this in a discussion that already assumes it as a necessary precondition, but, uh, you do you.

    "People blaming capitalism for everything then build a country that imports grain, while before them and after them it's among the largest exporters on the planet (if we combine Russia and Ukraine for the 'after' metric, no pun intended)."

    ...what? What does this have to do with literally anything, much less my comment about innovation/competition? Even setting aside the wild-assed assumption that my criticizing capitalism means I 'blame [it] for everything', this tirade you've launched into, presumably about Ukraine and the USSR, has no bearing on anything even tangentially related to this conversation.

    "People praising capitalism create conditions in which there's no reason to praise it. Like, it's competitive - they kill competitiveness with patents, IP, very complex legal systems. It's self-regulating and self-optimizing - they make regulations and do bailouts preventing sick companies from dying, make laws after their interests, then reactively make regulations to make conditions with them existing bearable, which have a side effect of killing smaller companies."

    Please allow me to reiterate: ...what? Capitalists didn't build literally any of those things, governments did, and capitalists have been trying to escape, subvert, or dismantle those systems at every turn, so this... vain, confusing attempt to pin a medal on capitalism's chest for restraining itself is not only wrong, it fails to understand basic facts about history. It's the opposite of self-regulating because it actively seeks to dismantle regulations (environmental, labor, wage, etc), and the only thing it optimizes for is the wealth of oligarchs, and maybe, if they're lucky, there will be a few crumbs left over for their simps.

    "That's the problem, both 'socialist' and 'capitalist' ideal systems ignore ape power dynamics."

    I'm going to go ahead and assume that 'the problem' has more to do with assuming that complex interacting systems can be simplified to 'ape (or any other animal's) power dynamics' than with failing to let the richest people just do whatever they want.

    "Such systems should be designed on top of the fact that jungle law is always allowed"

    So we should just be cool with everybody being poor so Jeff Bezos or whoever can upgrade his megayacht to a gigayacht or whatever? Let me say this in the politest way I know how: LOL no.

    Also, do you remember when I said this?

    "'Won't someone please think of the billionaires' is wearing kinda thin"

    You know, right before you went on this very long-winded, surreal, barely-coherent ramble? Did you imagine I would be convinced by literally any of it when all it amounts to is one giant, extraneous, tedious equivalent of 'Won't someone please think of the billionaires?' Simp harder and I bet maybe you can get a crumb or two yourself.