
Mole or cancer? The algorithm that gets one in three melanomas wrong and erases patients with dark skin

Technology
  • Though I get the point, I would caution against calling "racism!" on AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

    If only you read more than three sentences, you'd see the problem is with the training data. Instead you chose to make sure no one said the R word. Ben Shapiro would be proud.
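
A minimal sketch of the training-data point, with entirely hypothetical labels and predictions (the Fitzpatrick-style groups, the arrays, and the scikit-learn call are illustrative assumptions, not details of the article's tool): an aggregate metric can look acceptable while melanoma sensitivity collapses for the under-represented group.

```python
# Hypothetical stratified audit: per-group melanoma sensitivity (recall).
import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 0, 0])  # 1 = melanoma, 0 = benign
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # the model's predictions
skin_type = np.array(["I-II"] * 6 + ["V-VI"] * 4)   # Fitzpatrick-style group

print(f"overall sensitivity: {recall_score(y_true, y_pred):.2f}")  # 0.50
for group in np.unique(skin_type):
    mask = skin_type == group
    print(f"Fitzpatrick {group}: {recall_score(y_true[mask], y_pred[mask]):.2f}")
# Prints 0.75 for I-II and 0.00 for V-VI: every melanoma in the
# under-represented group is missed, which the overall number hides.
```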

  • They absolutely have power over the data sets.

    They could also fund research into other cancers and work with other countries like ones in Africa where there are more black people to sample.

    It's impossible to know intent, but it does seem pretty intentionally eugenics of them to do this when it has been widely criticized and they refuse to fix it. So I'd say it is explicitly racist.

    Eugenics??? That's crazy.

    So you'd prefer that they don't even start working with this screening method until we have gathered enough data to satisfy everyone's representation?

    Let's just do that and not do anything until everyone is happy. Nothing will ever happen and we will all collectively suffer.

    How about this: let's let the people with the knowledge use this "racist" data and help move the bar for health forward for everyone.

  • It isn't crazy, and it's the basis for bioethics, something I had to learn about when becoming a bioengineer. I also worked with people who literally designed the AI of today, and they continue to work with MIT, Google, and Stanford on machine learning... I have spoken extensively with these people about ethics, and a large portion of any AI engineer's job is literally just ethics. Actually, a lot of engineering is learning about ethics and accidents - they go hand in hand, like the Hyatt Regency walkway collapse.

    I never suggested they stop developing the screening technology; don't strawman, it's boring. I literally gave suggestions for how they can fix it and fix their data so it no longer functions as a tool of eugenics.

    Different case below, but related sentiment that AI is NOT a separate entity from its creators/engineers and they ABSOLUTELY should be held liable for the outcomes of what they engineer regardless of provable intent.

    You don’t think the people who make the generative algorithm have a duty to what it generates?

    And whatever you think anyway, the company itself shows that it feels obligated about what the AI puts out, because they are constantly trying to stop the AI from giving out bomb instructions and hate speech and illegal sexual content.

    The standard is not, and never was, whether they were "entirely" at fault here. It's whether they have any responsibility for this (and we all here can see that they do indeed have some), and how much that's worth financially in damages.

  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    You can't diagnose melanoma by skin features alone; you need a biopsy and genetic testing too. Furthermore, other types of melanoma sometimes do not show the typical ABCDE signs.

    Histopathology gives the accurate indication of whether it's melanoma or something else, and how far it has spread in the sample.

  • I know what bioethics is and how it applies to research and engineering. Your response doesn't really get to the core of what I'm saying, which is that the people making the AI tool aren't racist.

    Help me out: what do the researchers creating this AI screening tool in its current form (with racist data) have to do with it being a tool of eugenics? That's quite a damning statement.

    I'm assuming you have a much deeper understanding of what kind of data this AI screening tool uses and the finances and whatever else that goes into it. I feel that the whole "talk with Africa" idea for balancing out the data doesn't sound great and is overly simplified.

    Do you really believe that the people who created this AI screening tool should be punished for using this racist data, regardless of provable intent? Even if it saved lives?

    Does this kind of punishment apply to the doctor who used this unethical AI tool? His knowledge has to go into building it up somehow. Is he, by extension, a tool of eugenics too?

    I understand ethical obligations and that we need higher standards moving forward in society. But even if the data right now is unethical, and it saves lives, we should absolutely use it.

  • I addressed that point by saying their intent to be racist or not is irrelevant when we focus on impact to the actual victims (i.e. systemic racism). Who cares about the individual engineer's morality and thoughts when we have provable, measurable evidence of racial disparity that we can correct easily?

    It literally allows black people to die and saves white people more. That's eugenics.

    It is fine to coordinate with universities in, like, Kenya; what are you talking about?

    I never said shit about the makers of THIS tool being punished! Learn to read! I said the tool needs to be fixed!

    Like seriously you are constantly taking the position of the white male, empathizing, then running interference for him as if he was you and as if I'm your mommy about to spank you. Stop being weird and projecting your bullshit.

    Yes, doctors who use this tool on their black patients and white patients equally would be performing eugenics, just like the doctors who sterilized indigenous women because they were poor were doing the same. Again, intent and your ego aren't relevant when we focus on impacts to victims and how to help them.

    We should demand that they make getting data that is as good for black people as it is for white people their #1 priority, i.e. doing studies and collecting that data.
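
One common stopgap while that data is being collected is to reweight training samples so the under-represented group is not drowned out; the sketch below is a minimal, hypothetical illustration (the helper function, group labels, and 9:1 ratio are assumptions, not details of Quantus Skin).

```python
# Weight each sample inversely to its group's frequency so every
# skin-tone group contributes equally to the training loss.
import numpy as np

def group_balanced_weights(groups: np.ndarray) -> np.ndarray:
    """Return per-sample weights that equalize each group's total weight."""
    values, counts = np.unique(groups, return_counts=True)
    count_of = dict(zip(values, counts))
    n, k = len(groups), len(values)
    return np.array([n / (k * count_of[g]) for g in groups])

groups = np.array(["I-II"] * 900 + ["V-VI"] * 100)   # hypothetical 9:1 skew
weights = group_balanced_weights(groups)
print(weights[0], weights[-1])  # ~0.56 for the majority, 5.0 for the minority

# Many classifiers accept these via fit(X, y, sample_weight=weights).
# Reweighting narrows the gap but is no substitute for collecting data.
```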

  • Define eugenics for me, please.

    You're saying the tool in its current form with its data "seems pretty intentionally eugenics" and..."a tool for eugenics". And since you said the people who made that data, the AI tool, and those who are now using it are also responsible for anything bad...they are, by your supposed extension, eugenicists/racists and whatever other grotesque and immoral thing you can think of. Because your link says that regardless of intention, the AI engineers should ABSOLUTELY be punished.

    They have to fix it, of course, so it can become something other than a tool for eugenics as it is currently. Can you see where I think your argument goes way beyond rational?

    Would I have had this conversation with you if the tool worked really well on only black people and allowed white people to die disproportionately? I honestly can't say. But I feel you would be quiet on the issue. Am I wrong?

    I don't think using the data, as it is, to save lives makes you racist or supports eugenics. You seem to believe it does. That's what I'm getting after. That's why I think we are reading different books.

    Once again...define eugenics for me, please.

    Regardless, nothing I have said means that I don't recognize institutional racism or that I don't want the data set to become more evenly distributed so it takes into consideration the full spectrum of human life and helps ALL people.

  • Yeah, I'm done educating you, tbh. Not worth my time when you're arguing in bad faith.

    Learn what a strawman is. 90% of your post was strawman after strawman.

    Define strawman for me, kiddo. Then re-read your above comment. I counted 6, can you find all 6 strawman arguments in your comment?

    The conversation was never about you or your ego, but you've thoroughly convinced me with this conversation that you are probably both racist and a eugenicist - a hit dog hollers, and you seriously keep identifying yourself as the racist eugenicist here with no prompting from anyone else. I guess if that's who you are, then whatever. I don't talk to eugenicist racists either.

  • I expected more from an educated person.

    But if you won't define the word and you cut off the conversation, then you've just left me with the belief that you're using eugenics as a "scary" word, hoping to sound smart. I believe you can represent your field better.

    I hope you have a good one.

    For anybody still reading: the AI tool is not for eugenics, the researchers should not be punished, it's not racist to use unethical data, and it helps people who might otherwise die of a horrible disease. It doesn't help all the people we want it to right now, but hopefully in the future it will be an amazing tool for everyone.

  • Lol

    Sure, if you declare it, it must be so. Other people can read and can see your strawmen. You just look pompous and egotistical.

  • 446 votes
    81 posts
    0 views
    Urgh, the ones that say "well my ICE car can do 700 miles on a tank, so until an EV can do that I'm not doing it" annoy the hell out of me. I know damn well they've never driven that far without stopping at least once.
  • 4 votes
    5 posts
    0 views
    Of course, if they're in the army, can't they be executed for treason and the like?
  • Authors petition publishers to curtail their use of AI

    Technology
    74 votes
    2 posts
    6 views
    I'm sure publishers are all ears /s
  • I Counted All of the Yurts in Mongolia Using Machine Learning

    Technology
    17 votes
    9 posts
    21 views
    I'd say, when there's a policy and its goals aren't reached, that's a policy failure. If people don't like the policy, that's an issue, but it's a separate issue. It doesn't seem likely that people prefer living in tents, though. But to be fair, the government may be doing the best it can. It's ranked "Flawed Democracy" by The Economist Democracy Index; that's really good, I'd say, considering the circumstances. They are placed slightly ahead of Argentina and Hungary.

    OP has this to say: "Due to the large number of people moving to urban locations, it has been difficult for the government to build the infrastructure needed for them. The informal settlements that grew from this difficulty are now known as ger districts. There have been many efforts to formalize and develop these areas. The Law on Allocation of Land to Mongolian Citizens for Ownership, passed in 2002, allowed for existing ger district residents to formalize the land they settled, and allowed for others to receive land from the government into the future. Along with the privatization of land, the Mongolian government has been pushing for the development of ger districts into areas with housing blocks connected to utilities. The plan for this was published in 2014 as Ulaanbaatar 2020 Master Plan and Development Approaches for 2030. Although progress has been slow (Choi and Enkhbat 7), they have been making progress in building housing blocks in ger districts. Residents of ger districts sell or exchange their plots to developers who then build housing blocks on them. Often this is in exchange for an apartment in the building, and often the value of the apartment is less than the land they originally had (Choi and Enkhbat 15)."

    Based on what I've read about the ger districts, they have been around since at least the 1970s, and progress on developing them has been slow. When ineffective policy results in a large chunk of the populace generationally living in yurts on the outskirts of urban areas, it's clear that there is failure.

    Choi, Mack Joong, and Urandulguun Enkhbat. "Distributional Effects of Ger Area Redevelopment in Ulaanbaatar, Mongolia." International Journal of Urban Sciences, vol. 24, no. 1, Jan. 2020, pp. 50-68. https://doi.org/10.1080/12265934.2019.1571433.
  • Companies are using Ribbon AI, an AI interviewer, to screen candidates

    Technology
    56 votes
    52 posts
    39 views
    I feel like I could succeed in an LLM selection process. I could sell my skills to a robot, and I could get an LLM to help. It's a long way ahead of keyword-based automatic selectors. At least an LLM is predictable; human judges are so variable.
  • [paper] Evidence of a social evaluation penalty for using AI

    Technology
    28 votes
    10 posts
    30 views
    vendetta9076@sh.itjust.works
    I'm specifically talking about toil when it comes to my job as a software developer. I already know I need an if statement and a for loop all wrapped in a try catch. Rather than spending a couple of minutes coding that, I have Cursor do it for me instantly and then fill out the actual code. Or, I've written something in Python and it needs to be converted to JavaScript: I can ask Claude to convert it one to one and test it, and it comes back with either no errors or a very simple error I need to fix. It takes a minute. Instead I could have spent 15 minutes rewriting it myself, maybe making more mistakes that take longer.
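
For anyone curious what that Python-to-JavaScript step looks like when scripted rather than done in a chat window, here is a minimal sketch. It assumes the `anthropic` Python SDK with an ANTHROPIC_API_KEY set in the environment; the snippet, prompt, and model id are illustrative, not the commenter's actual setup.

```python
# Ask Claude for a one-to-one Python -> JavaScript conversion via the API.
import anthropic

PY_SNIPPET = '''
def total(xs):
    return sum(x * 2 for x in xs)
'''

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Convert this Python to equivalent JavaScript, "
                   "one to one, with no commentary:\n" + PY_SNIPPET,
    }],
)
print(reply.content[0].text)  # the JavaScript, ready to drop into a test file
```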
  • Microsoft's AI Secretly Copying All Your Private Messages

    Technology
    0 votes
    4 posts
    17 views
    Forgive me for not explaining better. Here are the terms potentially needing explanation.

    Provisioning, in this case, is initial system setup: the kind of stuff you would do manually after a fresh install, but usually implying a regimented and repeatable process.

    Virtual machine (VM) snapshots are like a save state in a game, and are often used to reset a virtual machine to a particular known-working condition.

    Preboot Execution Environment (PXE, aka "network boot") is a network adapter feature that lets you boot a physical machine from a hosted network image rather than the usual installation on locally attached storage. It's probably tucked away in your BIOS settings, but many computers have the feature since it's a common requirement in commercial deployments. As with the VM snapshot described above, a PXE image is typically a known-working state that resets on each boot.

    Non-virtualized means not using hardware virtualization; I meant specifically not running inside a virtual machine. Local-only means without a network, or just not booting from a network-hosted image.

    Telemetry refers to data-collecting functionality. Most software has it; Windows has a lot. Telemetry isn't necessarily bad, since it can, for example, help reveal and resolve bugs and usability problems, but it is easily (and has often been) abused by data-hungry corporations like MS, so disabling it is an advisable precaution. MS = Microsoft. OSS = open source software.

    Group policies are administrative settings in Windows that control standards (for stuff like security, power management, licensing, file system and settings access, etc.) for user groups on a machine or network. Most users stick with the defaults, but you can edit these yourself for a greater degree of control.

    Docker lets you run software inside "containers" to isolate it from the rest of the environment, exposing and/or virtualizing just the resources it needs to run, and Compose is a related tool for defining one or more of these containers, how they interact, etc. To my knowledge there is no one-to-one equivalent for Windows.

    Obviously, many of these concepts relate to IT work, as do the use cases I had in mind, but the software is simple enough for the average user if you just pick one of the premade playbooks. (The Atlas playbook is popular among gamers, for example.)

    Edit: added explanations for Docker and telemetry