
Mole or cancer? The algorithm that gets one in three melanomas wrong and erases patients with dark skin

  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    Though I get the point, I would caution against calling "racism!" over AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

  • Though I get the point, I would caution against calling "racism!" over AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

    The racism is in training on white patients only, not in the abilities of the AI in this case.

  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    If someone with dark skin gets a real doctor to look at them, because it's known that this thing doesn't work at all in their case, then they are better off, really.

  • The racism is in training on white patients only, not in the abilities of the AI in this case.

    It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, and most of the books and online images I've looked into seem to feature majority fair-skinned individuals.

    "...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

    Calling things out as racist really works to mask what a useful tool this could be to help screen for skin cancers.
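
    For what it's worth, this is checkable: if each image carries a skin-tone label, you can stratify the evaluation by Fitzpatrick type and compare sensitivity across groups. A minimal sketch in Python with entirely made-up stand-in data (none of the names or numbers come from the article; the one-in-three miss rate on darker types just echoes the headline):

    import numpy as np
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)

    # Toy stand-in data: 1 = melanoma, 0 = benign; Fitzpatrick skin types I-VI.
    y_true = rng.integers(0, 2, size=1000)
    skin_type = rng.integers(1, 7, size=1000)

    # Simulate a model that misses about one in three melanomas on darker
    # skin (types IV-VI) versus one in ten on lighter skin.
    miss = rng.random(1000) < np.where(skin_type >= 4, 1 / 3, 0.10)
    y_pred = np.where(miss, 1 - y_true, y_true)

    # Sensitivity (recall on the melanoma class), stratified by skin type.
    for t in range(1, 7):
        mask = skin_type == t
        sens = recall_score(y_true[mask], y_pred[mask])
        print(f"Fitzpatrick type {t}: sensitivity = {sens:.2f} (n = {mask.sum()})")

    A per-group table like this is exactly what a review could demand before deployment; a single aggregate accuracy number hides the gap.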

  • Though I get the point, I would caution against calling "racism!" over AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

    Think more about the intended audience.

    This isn't about melanoma. The media has been pushing yellow journalism like this regarding AI since it became big.

    It's similar to how right-wing media would push headlines about immigrant invasions. Hating on AI is the left's version of illegal immigrants.

  • It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, and most of the books and online images I've looked into seem to feature majority fair-skinned individuals.

    "...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

    Calling things out as racist really works to mask what a useful tool this could be to help screen for skin cancers.

    Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

  • Think more about the intended audience.

    This isn't about melanoma. The media has been pushing yellow journalism like this regarding AI since it became big.

    It's similar to how right-wing media would push headlines about immigrant invasions. Hating on AI is the left's version of illegal immigrants.

    Reading the article, it seems like a badly regulated procurement process with a company that did not meet the criteria to begin with.

    Poor results on people with darker skin colour are a known issue. However, the article says its training data contains ONLY white patients. The issue is not hate against AI; it's about what the tools can do with obviously problematic data.

    Unless the article is lying, these are valid concerns that have nothing to do with hating on AI; they have everything to do with the minimal requirements for health AI tools.

  • Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

    I never said that the data gathered over decades wasn't biased in some way by racial prejudice, discrimination, or social/cultural norms throughout history. I am quite aware of those things.

    But if a majority of the data you have at your disposal is from fair-skinned people, and that's all you have... using it is not racist.

    Would you prefer that no data was used, or that we wait until the full spectrum of people is represented in sufficient quantities, or that they make stuff up?

    This is what they have. Creating something to speed up diagnosis helps ALL people; calling them racist for trying to help does not.

    The creators of this AI screening tool do not have any power over how the data was collected. They're not racist and it's quite ignorant to reason that they are.
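
    There is also a middle ground between using the skewed data as-is and waiting for perfect data: standard imbalance mitigations, such as weighting the underrepresented group more heavily during training. A minimal sketch, with every name and number made up for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Toy cohort: 95% group 0 (lighter skin), 5% group 1 (darker skin).
    n = 2000
    group = rng.choice([0, 1], size=n, p=[0.95, 0.05])
    X = rng.normal(size=(n, 8)) + 0.5 * group[:, None]  # features shift by group
    y = (X[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

    # Inverse-frequency weights: each sample counts in proportion to how
    # underrepresented its group is, so the minority group isn't drowned out.
    freq = np.bincount(group) / n
    weights = 1.0 / freq[group]

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y, sample_weight=weights)

    Reweighting cannot conjure information that was never collected, though; with very few images of darker skin, the model's behaviour on that group stays poorly constrained, which is the underlying problem either way.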

  • Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

    Yeah, it does make it racist, but which party is performing the racist act? The AI, the AI trainer, the data collector, or the system that prioritises white patients? That's the important distinction that simply calling it racist fails to address.

  • Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

    My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that? To me, saying something is racist implies intent. This situation could be that, but it could also be a case where the data is just not racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons the dataset may be mostly fair-skinned. To prattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair-skinned people are more susceptible, so there's more data; like you mentioned, dark-skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark-skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally, in which case, yeah, racist 100%.

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair-skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.

  • Reading the article, it seems like a badly regulated procurement process with a company that did not meet the criteria to begin with.

    Poor results on people with darker skin colour are a known issue. However, the article says its training data contains ONLY white patients. The issue is not hate against AI; it's about what the tools can do with obviously problematic data.

    Unless the article is lying, these are valid concerns that have nothing to do with hating on AI; they have everything to do with the minimal requirements for health AI tools.

    Do you think any of these articles are lying, or that they are not intended to generate certain sentiments towards immigrants?

    Are they valid concerns to be aware of?

    The reason I'm asking is: could you not say the same about any of these articles, even though we all know exactly what the NY Post is doing?

    Compare it to posts on Lemmy with AI topics. They're the same.

  • My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that? To me, saying something is racist implies intent. This situation could be that, but it could also be a case where the data is just not racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons the dataset may be mostly fair-skinned. To prattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair-skinned people are more susceptible, so there's more data; like you mentioned, dark-skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark-skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally, in which case, yeah, racist 100%.

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair-skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.

    My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that?

    A lot of AI research in general was first done by largely Caucasian students, so the datasets they used skewed that way, and other projects very often started from those initial datasets. The historical reason there are more students of that skin tone is that they in general have the most money to finance the schooling, and that's because past racism held African-American families back from accumulating wealth and accessing education; that still affects their finances and chances today, even assuming there is no racism still going on in scholarships and student admissions these days.

    Not saying this is specifically happening for this project, just in a lot of AI projects in general. It causes issues with facial recognition in lots of apps, for example.

  • My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that? To me, saying something is racist implies intent. This situation could be that, but it could also be a case where the data is just not racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons the dataset may be mostly fair-skinned. To prattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair-skinned people are more susceptible, so there's more data; like you mentioned, dark-skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark-skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally, in which case, yeah, racist 100%.

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair-skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.

    Seems more like a byproduct of racism than racist in and of itself.

  • My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that?

    A lot of AI research in general was first done by largely Caucasian students, so the datasets they used skewed that way, and other projects very often started from those initial datasets. The historical reason there are more students of that skin tone is that they in general have the most money to finance the schooling, and that's because past racism held African-American families back from accumulating wealth and accessing education; that still affects their finances and chances today, even assuming there is no racism still going on in scholarships and student admissions these days.

    Not saying this is specifically happening for this project, just in a lot of AI projects in general. It causes issues with facial recognition in lots of apps, for example.

    They did touch on the facial recognition aspect as well. My main thing is, does that make the model racist if the source data isn't diverse? I'd argue that it's not, although racist decisions may have led to a poor dataset.

  • Seems more like a byproduct of racism than racist in and of itself.

    I think that's a very real possibility, but as with most things, there are other factors that could affect the dataset as well.

  • It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, and most of the books and online images I've looked into seem to feature majority fair-skinned individuals.

    "...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

    Calling things out as racist really works to mask what a useful tool this could be to help screen for skin cancers.

    Training data will consist of 100% "obvious" pictures of skin cancers

    Only if you're using shitty training data
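
    And the low-tech guard against shipping on skewed data is auditing the dataset's composition before training anything. A minimal sketch, assuming a metadata CSV with one row per training image and a skin-tone column (the file name and column name here are hypothetical):

    import csv
    from collections import Counter

    # Count training images per Fitzpatrick skin type.
    with open("lesion_metadata.csv", newline="") as f:
        counts = Counter(row["fitzpatrick_type"] for row in csv.DictReader(f))

    total = sum(counts.values())
    for skin_type, n in sorted(counts.items()):
        print(f"type {skin_type}: {n} images ({n / total:.1%})")

    If a type has zero or near-zero rows, that is a deployment blocker for those patients, not a footnote.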

  • Seems more like a byproduct of racism than racist in and of itself.

    Yes, we call that "structural racism".

  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    It's... going to erase me?

  • My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that? To me, saying something is racist implies intent. This situation could be that, but it could also be a case where the data is just not racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons the dataset may be mostly fair-skinned. To prattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair-skinned people are more susceptible, so there's more data; like you mentioned, dark-skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark-skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally, in which case, yeah, racist 100%.

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair-skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.

    There is a more specific term for it: institutional racism.

    Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]

  • Yeah, it does make it racist, but which party is performing the racist act? The AI, the AI trainer, the data collector, or the system that prioritises white patients? That's the important distinction that simply calling it racist fails to address.

    There is a more specific term for it: institutional racism.

    Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]
