
Mole or cancer? The algorithm that gets one in three melanomas wrong and erases patients with dark skin

Technology
  • Media forcing opinions using the same framework they always use.

    Regardless of whether it's the right or the left: media is owned by people like the Kochs, the Bannons, and the Murdochs, even left-leaning media.

    They don't want the left using AI or building on it. They've been pushing a ton of articles into left-leaning spaces using the same framework they use at election season when they're looking to spin up the right-wing base. It's all about taking jobs, threats to children, the status quo.

  • > It's going to erase me?

    Who said that?

  • > I never said that the data gathered over decades wasn't biased in some way towards racial prejudice, discrimination, or social/cultural norms over history. I am quite aware of those things.
    >
    > But if the majority of the data you have at your disposal is from fair-skinned people, and that's all you have, using it is not racist.
    >
    > Would you prefer that no data be used, that we wait until the full spectrum of people is represented in sufficient quantities, or that they just make things up?
    >
    > This is what they have. Calling them racist for trying to help and for creating something that speeds up diagnosis, which helps ALL people, is wrong.
    >
    > The creators of this AI screening tool do not have any power over how the data was collected. They're not racist, and it's quite ignorant to reason that they are.

    They absolutely have power over the data sets.

    They could also fund research into other cancers and work with other countries, like ones in Africa where there are more black people to sample.

    It's impossible to know intent, but it does seem pretty intentionally eugenicist of them to do this when it has been widely criticized and they refuse to fix it. So I'd say it is explicitly racist.

  • > Though I get the point, I would caution against calling "racism!" on AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

    If only you'd read more than three sentences, you'd see the problem is with the training data. Instead you chose to make sure no one said the R word. Ben Shapiro would be proud.
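The training-data point is a checkable claim, not name-calling. Below is a minimal sketch of the audit, assuming scikit-learn, a hypothetical trained melanoma classifier, and a test set annotated with Fitzpatrick skin types; all names are illustrative stand-ins, not the actual Quantus Skin code.

```python
# Per-skin-type sensitivity audit for a lesion classifier.
# Hypothetical sketch: `model`, `X_test`, `y_test`, `skin_type` are
# illustrative stand-ins, not any real deployed pipeline.
import numpy as np
from sklearn.metrics import recall_score

def sensitivity_by_group(model, X_test, y_test, skin_type):
    """Recall on true melanomas (TP / (TP + FN)), split by skin type."""
    y_pred = model.predict(X_test)
    return {
        group: recall_score(y_test[skin_type == group],
                            y_pred[skin_type == group])
        for group in np.unique(skin_type)
    }

# A result like {'I-II': 0.89, 'III-IV': 0.74, 'V-VI': 0.55} would mean
# the model misses nearly half of real melanomas on the darkest skin:
# the headline's "one in three" failure made group-specific.
```

If the per-group numbers diverge like that while overall accuracy looks fine, and the divergence tracks training-set composition, that is evidence for the data explanation over the pure-physics one.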

  • Eugenics??? That's crazy.

    So you'd prefer that they not even start working with this screening method until we've gathered enough data to satisfy everyone's representation?

    Let's just do that and not do anything until everyone is happy. Nothing will ever happen and we will all collectively suffer.

    How about this: let's let the people with the knowledge use this "racist" data and help move the bar for health forward for everyone.

  • It isn't crazy; it's the basis of bioethics, something I had to learn about when becoming a bioengineer. I worked with people who literally designed the AI of today, and they continue to work with MIT, Google, and Stanford on machine learning. I have spoken extensively with these people about ethics, and a large portion of any AI engineer's job is literally just ethics. Actually, a lot of engineering is learning from ethics and accidents; they go hand in hand, like the Hyatt Regency walkway collapse.

    I never suggested they stop developing the screening technology. Don't strawman; it's boring. I literally gave suggestions for how they can fix it and fix their data so it no longer functions as a tool of eugenics.

    Different case below, but a related sentiment: AI is NOT a separate entity from its creators/engineers, and they ABSOLUTELY should be held liable for the outcomes of what they engineer, regardless of provable intent.

    You don't think the people who make the generative algorithm have a duty of care for what it generates?

    And whatever you think anyway, the company itself shows that it feels obligated about what the AI puts out, because it is constantly trying to stop the AI from giving out bomb instructions, hate speech, and illegal sexual content.

    The standard is not, and never was, whether they were "entirely" at fault. It's whether they bear any responsibility at all (and we can all see that they do), and how much that's worth financially in damages.

  • > The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary for its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    You can't diagnose melanoma by skin features alone; you need a biopsy and genetic testing too. Furthermore, some types of melanoma don't show the typical ABCDE signs.

    Histopathology gives the accurate indication of whether it's melanoma or something else, and of how far it has spread in the sample.

  • I know what bioethics is and how it applies to research and engineering. Your response doesn't really get to the core of what I'm saying, which is that the people making the AI tool aren't racist.

    Help me out: what do the researchers creating this AI screening tool in its current form (with racist data) have to do with it being a tool of eugenics? That's quite a damning statement.

    I'm assuming you have a much deeper understanding of what kind of data this AI screening tool uses, the finances, and whatever else goes into it. I feel that the whole "talk with Africa" idea for balancing out the data doesn't sound great and is overly simplified.

    Do you really believe that the people who created this AI screening tool should be punished for using this racist data, regardless of provable intent? Even if it saved lives?

    Does this kind of punishment apply to the doctor who used this unethical AI tool? His knowledge has to go into building it somehow. Is he, by extension, a tool of eugenics too?

    I understand ethical obligations and that we need higher standards moving forward as a society. But if the data right now is unethical and it saves lives, we should absolutely use it.

  • I addressed that point by saying their intent to be racist or not is irrelevant when we focus on the impact on the actual victims (i.e., systemic racism). Who cares about the individual engineer's morality and thoughts when we have provable, measurable evidence of racial disparity that we can easily correct?

    It literally allows black people to die while saving more white people. That's eugenics.

    It is fine to coordinate with universities in, like, Kenya. What are you talking about?

    I never said shit about the makers of THIS tool being punished! Learn to read! I said the tool needs to be fixed!

    Seriously, you are constantly taking the position of the white male, empathizing, then running interference for him as if he were you and as if I'm your mommy about to spank you. Stop being weird and projecting your bullshit.

    Yes, doctors who use this tool on their black and white patients equally would be performing eugenics, just like the doctors who sterilized indigenous women because they were poor were doing the same. Again, intent and your ego aren't relevant when we focus on the impacts on victims and how to help them.

    We should demand that they work in a very meaningful way to make the data as good for black people as it is for white people, as their #1 priority, i.e., doing studies and collecting that data.
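On making the data "as good for black people": while representative images are being collected, there are standard rebalancing levers that can narrow the measured gap in the meantime. A hedged sketch under the same illustrative assumptions as above (hypothetical arrays `X`, `y`, `skin_type`; scikit-learn):

```python
# Two stopgap rebalancing levers while representative data is collected.
# Hypothetical sketch: X, y, skin_type are illustrative stand-ins.
import numpy as np
from sklearn.utils import resample

def group_weights(skin_type):
    """Weight each sample inversely to its skin-type frequency so
    underrepresented groups count more during training."""
    groups, counts = np.unique(skin_type, return_counts=True)
    freq = dict(zip(groups, counts / counts.sum()))
    return np.array([1.0 / freq[g] for g in skin_type])

def oversample_group(X, y, skin_type, group):
    """Duplicate (with replacement) samples of an underrepresented
    group until it matches the size of the largest group."""
    mask = skin_type == group
    n_target = np.unique(skin_type, return_counts=True)[1].max()
    X_up, y_up = resample(X[mask], y[mask], replace=True,
                          n_samples=int(n_target), random_state=0)
    return np.concatenate([X, X_up]), np.concatenate([y, y_up])

# Most estimators accept per-sample weights, e.g.:
# model.fit(X, y, sample_weight=group_weights(skin_type))
```

Neither trick adds information that isn't already in the images; it only stops the optimizer from treating the underrepresented group as noise. Collecting the data remains the real fix, which is the commenter's point.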

  • Define eugenics for me, please.

    You're saying the tool in its current form, with its data, "seems pretty intentionally eugenics" and is "a tool for eugenics". And since you said the people who made that data and the AI tool, and those now using it, are responsible for anything bad, they are, by your supposed extension, eugenicists/racists and whatever other grotesque and immoral thing you can think of. Because your link says that regardless of intention, the AI engineers should ABSOLUTELY be punished.

    They have to fix it, of course, so it can become something other than a tool for eugenics, as it currently is. Can you see where I think your argument goes way beyond rational?

    Would I have had this conversation with you if the tool worked really well on only black people and allowed white people to die disproportionately? I honestly can't say. But I feel you would be quiet on the issue. Am I wrong?

    I don't think using the data, as it is, to save lives makes you racist or supports eugenics. You seem to believe it does. That's what I'm getting at. That's why I think we are reading different books.

    Once again: define eugenics for me, please.

    Regardless, nothing I have said means that I don't recognize institutional racism, or that I don't want the data set to become more evenly distributed so that it takes into consideration the full spectrum of human life and helps ALL people.

  • 348 votes
    72 posts
    8 views
    Sure, the internet is more practical. But consider the odds of being caught during the time required to execute a decent strike plan, even one as vague as "we're going to Amerika and we're going to hit 50 high profile targets on July 4th, one in every state" (dear NSA analyst, this is entirely hypothetical). Your agents spread to the field and start assessing from the ground the highest-impact targets attainable with their resources, with extensive back and forth between the field and central command daily for 90 days of prep, but it's all carried out on 270 different active social media channels as innocuous-looking photo exchanges, with 540 pre-arranged algorithms hiding the messages in the noise of the image bits. The chances of security agencies picking this up from the communication itself? About 100x less than them noticing 50 teams of activists deployed to 50 states at roughly the same time, even if those teams never communicate anything.

    HF (more often called shortwave) is well suited for the numbers game: a deep-cover agent lying in wait, potentially for years, whose only "tell" is an odd habit of listening to the radio most nights. All they're waiting for is a binary message: if you hear the sequence 3 17 22, you are to make contact for further instructions. That message may come at any time, or may not come for a decade. These days you would make that contact for further instructions via the internet, and sure, it would be more practical to hide the "make contact" signal in the internet too, but shortwave is a longstanding technology with known operating parameters.
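The "hiding the messages in the noise of the image bits" step is classic least-significant-bit steganography. A textbook sketch, assuming Pillow is available; purely illustrative, since plain LSB embedding like this is precisely what statistical steganalysis tools are built to catch:

```python
# Textbook least-significant-bit (LSB) steganography sketch.
# The lowest bit of each colour channel is effectively sensor noise,
# so overwriting it with message bits leaves the image looking unchanged.
from PIL import Image

def embed(cover_path: str, message: str, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    # 8 bits per byte, then a NUL byte so a reader knows where to stop.
    bits = "".join(f"{b:08b}" for b in message.encode()) + "0" * 8
    pixels, i = [], 0
    for px in img.getdata():
        px = list(px)
        for c in range(3):
            if i < len(bits):
                px[c] = (px[c] & ~1) | int(bits[i])  # overwrite lowest bit
                i += 1
        pixels.append(tuple(px))
    out = Image.new("RGB", img.size)
    out.putdata(pixels)
    out.save(out_path)  # must be lossless (PNG); JPEG would destroy the bits
```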
  • Trump's Corrupt Plan to Steal Rural America's Broadband Future

    Technology
    196 votes
    13 posts
    24 views
    I wonder how betrayed the people in Appalachia feel when their supposed "own", Vance, stood for this.
  • Something I noticed

    Technology
    3 votes
    2 posts
    7 views
    This would be better suited to some casual ranting community, or one concerned with tech bros. I think it's completely off topic here.
  • Canadian telecom hacked by suspected China state group

    Technology
    57 votes
    3 posts
    10 views
    While this news is both expected and unsettling, I'm pretty pleased that our government makes this info available to the public. And the site itself is such a vast resource for security info, tools, etc. Not all of our government departments are something to behold, but our cyber teams are top notch. And holy shit: https://github.com/CybercentreCanada
  • Websites Are Tracking You Via Browser Fingerprinting

    Technology
    296 votes
    41 posts
    49 views
    Makes you question how digital stalking is still allowed, doesn't it?
  • Meta is now a defense contractor

    Technology
    361 votes
    54 posts
    38 views
    Best decision ever for a company. The US government pisses away billions of its taxpayers' money and buys all the low-quality crap from the MIL without question.
  • 73 votes
    38 posts
    32 views
    For sure they are! Meta more than the others, though.
  • Short summary of feature phone market in 2025

    Technology
    0 votes
    1 post
    10 views
    No one has replied.