
Mole or cancer? The algorithm that gets one in three melanomas wrong and erases patients with dark skin

Technology
  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    Though I get the point, I would caution against calling "racism!" on AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

  • Though I get the point, I would caution against calling "racism!" on AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

    The racism is in training on white patients only, not in the abilities of the AI in this case.

  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    If someone with dark skin gets a real doctor to look at them, because it's known that this thing doesn't work at all in their case, then they are better off, really.

  • The racism is in training on white patients only, not in the abilities of the AI in this case.

    It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, which in most books and online images I've looked into seem to be majority fair-skinned individuals.

    "...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

    Calling out things as racist really works to mask what a useful tool this could be to help screen for skin cancers.
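
    The gap the thread is arguing about is the kind of thing aggregate metrics hide: a model can score well overall while failing badly on an under-represented group. A minimal sketch with invented numbers (nothing here comes from Quantus Skin or the article):

```python
# Invented numbers to illustrate how aggregate accuracy can hide a
# subgroup failure; nothing here is real Quantus Skin data.

def sensitivity(preds, labels):
    """True-positive rate: the share of actual melanomas that get flagged."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    positives = sum(labels)
    return tp / positives if positives else 0.0

# 90 melanoma cases on light skin: the model flags 80 of them.
light_labels = [1] * 90
light_preds = [1] * 80 + [0] * 10

# 10 melanoma cases on dark skin: the model flags only 4.
dark_labels = [1] * 10
dark_preds = [1] * 4 + [0] * 6

overall = sensitivity(light_preds + dark_preds, light_labels + dark_labels)
print(f"overall:    {overall:.2f}")                                 # 0.84
print(f"light skin: {sensitivity(light_preds, light_labels):.2f}")  # 0.89
print(f"dark skin:  {sensitivity(dark_preds, dark_labels):.2f}")    # 0.40
```

    A headline sensitivity of 84% would pass a naive evaluation even though the model misses most cancers in the smaller group, which is why per-group reporting matters for a screening tool.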

  • Though I get the point, I would caution against calling "racism!" on AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

    Think more about the intended audience.

    This isn't about melanoma. The media has been pushing yellow journalism like this regarding AI since it became big.

    It's similar to how right wing media would push headlines about immigrant invasions. Hating on AI is the left's version of illegal immigrants.

  • It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, which in most books and online images I've looked into seem to be majority fair-skinned individuals.

    "...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

    Calling out things as racist really works to mask what a useful tool this could be to help screen for skin cancers.

    Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

  • Think more about the intended audience.

    This isn't about melanoma. The media has been pushing yellow journalism like this regarding AI since it became big.

    It's similar to how right wing media would push headlines about immigrant invasions. Hating on AI is the left's version of illegal immigrants.

    Reading the article, it seems like a badly regulated procurement process with a company that did not meet the criteria to begin with.

    Poor results on people with darker skin colour are a known issue. However, the article says its training data contains ONLY white patients. The issue is not hate against AI; it's about what the tools can do with obviously problematic data.

    Unless the article is lying, these are valid concerns that have nothing to do with hating on AI, it has all to do with the minimal requirements for health AI tools.

  • Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

    I never said that the data gathered over decades wasn't biased in some way towards racial prejudice, discrimination, or social/cultural norms over history. I am quite aware of those things.

    But if a majority of the data you have at your disposal is from fair skinned people, and that's all you have...using it is not racist.

    Would you prefer that no data was used, or that we wait until the spectrum of people are fully represented in sufficient quantities, or that they make up stuff?

    This is what they have. Calling them racist for trying to help and create something to speed up diagnosis that helps ALL people.

    The creators of this AI screening tool do not have any power over how the data was collected. They're not racist and it's quite ignorant to reason that they are.

  • Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

    Yeah, it does make it racist, but which party is performing the racist act? The AI, the AI trainer, the data collector, or the system that prioritises white patients? That's the important distinction that simply calling it racist fails to address.

  • Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting edge medical care and research studies? Would be pretty racist.

    There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

    My only real counter to this is who created the dataset and did the people that were creating the app have any power to affect that? To me, to say something is racist implies intent, which this situation could involve, but it could also be a case where the data just isn't racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons that the dataset may be mostly fair skinned. To rattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair skinned people are more susceptible, so there's more data; like you mentioned, dark skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally in which case, yea racist 100%

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.
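
    If the public datasets really do skew heavily toward fair-skinned patients, one standard (and only partial) mitigation is to reweight samples by inverse group frequency during training. A hypothetical sketch; the group names and counts are made up:

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Per-sample weights that upweight under-represented groups:
    weight = n_total / (n_groups * group_count)."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical 90/10 split between lighter and darker skin tones.
groups = ["light"] * 90 + ["dark"] * 10
weights = inverse_frequency_weights(groups)
# Each light-skin sample gets 100 / (2 * 90) ≈ 0.56,
# each dark-skin sample gets 100 / (2 * 10) = 5.0.
```

    Reweighting can't conjure information that isn't in the data, though: ten heavily upweighted examples still cover far less of the real variation in darker skin than ninety examples cover for lighter skin.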

  • Reading the article, it seems like a badly regulated procurement process with a company that did not meet the criteria to begin with.

    Poor results on people with darker skin colour are a known issue. However, the article says its training data contains ONLY white patients. The issue is not hate against AI; it's about what the tools can do with obviously problematic data.

    Unless the article is lying, these are valid concerns that have nothing to do with hating on AI, it has all to do with the minimal requirements for health AI tools.

    Do you think any of these articles are lying or that these are not intended to generate certain sentiments towards immigrants?

    Are they valid concerns to be aware of?

    The reason I'm asking is: could you not say the same about any of these articles, even though we all know exactly what the NY Post is doing?

    Compare it to posts on Lemmy with AI topics. They're the same.

    My only real counter to this is who created the dataset and did the people that were creating the app have any power to affect that? To me, to say something is racist implies intent, which this situation could involve, but it could also be a case where the data just isn't racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons that the dataset may be mostly fair skinned. To rattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair skinned people are more susceptible, so there's more data; like you mentioned, dark skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally in which case, yea racist 100%

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.

    My only real counter to this is who created the dataset and did the people that were creating the app have any power to affect that?

    A lot of AI research in general was first done by largely Caucasian students, so the datasets they used skewed that way, and other projects very often started from those initial datasets. The historical reason there are more students of that skin tone is that they in general have the most money to finance the schooling, and that's because past racism held African-American families back from accumulating wealth and accessing education, which still affects their finances and chances today, even assuming there is no racism still going on in scholarships and student admissions these days.

    Not saying this is specifically happening for this project, just a lot of AI projects in general. It causes issues with facial recognition in lots of apps for example.

    My only real counter to this is who created the dataset and did the people that were creating the app have any power to affect that? To me, to say something is racist implies intent, which this situation could involve, but it could also be a case where the data just isn't racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons that the dataset may be mostly fair skinned. To rattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair skinned people are more susceptible, so there's more data; like you mentioned, dark skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally in which case, yea racist 100%

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.

    Seems more like a byproduct of racism than racist in and of itself.

  • My only real counter to this is who created the dataset and did the people that were creating the app have any power to affect that?

    A lot of AI research in general was first done by largely Caucasian students, so the datasets they used skewed that way, and other projects very often started from those initial datasets. The historical reason there are more students of that skin tone is that they in general have the most money to finance the schooling, and that's because past racism held African-American families back from accumulating wealth and accessing education, which still affects their finances and chances today, even assuming there is no racism still going on in scholarships and student admissions these days.

    Not saying this is specifically happening for this project, just a lot of AI projects in general. It causes issues with facial recognition in lots of apps for example.

    They did touch on the facial recognition aspect as well. My main thing is: does that make the model itself racist if the source data isn't diverse? I'd argue that it doesn't, although racist decisions may have led to a poor dataset.

  • Seems more like a byproduct of racism than racist in and of itself.

    I think that's a very likely possibility, but as with most things, there are other factors that could affect the dataset as well.

  • It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, which in most books and online images I've looked into seem to be majority fair-skinned individuals.

    "...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

    Calling out things as racist really works to mask what a useful tool this could be to help screen for skin cancers.

    Training data will consist of 100% "obvious" pictures of skin cancers

    Only if you're using shitty training data

  • Seems more like a byproduct of racism than racist in and of itself.

    Yes, we call that "structural racism".

  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    It's... going to erase me?

    My only real counter to this is who created the dataset and did the people that were creating the app have any power to affect that? To me, to say something is racist implies intent, which this situation could involve, but it could also be a case where the data just isn't racially diverse, which doesn't necessarily imply racism.

    There's a plethora of reasons that the dataset may be mostly fair skinned. To rattle off a couple that come to mind (all of this may be known, idk, these are ignorant possibilities on my side): perhaps fair skinned people are more susceptible, so there's more data; like you mentioned, dark skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

    Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally in which case, yea racist 100%

    Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair skinned people are significantly more likely to get melanoma, which does give some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.

    There is a more specific word for it: Institutional racism.

    Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]

  • Yeah, it does make it racist, but which party is performing the racist act? The AI, the AI trainer, the data collector, or the system that prioritises white patients? That's the important distinction that simply calling it racist fails to address.

    There is a more specific word for it: Institutional racism.

    Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]
