
Mole or cancer? The algorithm that gets one in three melanomas wrong and erases patients with dark skin

Technology
  • I never said that the data gathered over decades wasn't biased in some way by racial prejudice, discrimination, or social/cultural norms throughout history. I am quite aware of those things.

    But if a majority of the data you have at your disposal is from fair-skinned people, and that's all you have... using it is not racist.

    Would you prefer that no data was used, or that we wait until the full spectrum of people is represented in sufficient quantities, or that they make things up?

    This is what they have. They are trying to help by creating something that speeds up diagnosis, which helps ALL people, and they get called racist for it.

    The creators of this AI screening tool do not have any power over how the data was collected. They're not racist and it's quite ignorant to reason that they are.

    They absolutely have power over the data sets.

    They could also fund research into other cancers and work with other countries like ones in Africa where there are more black people to sample.

    It's impossible to know intent but it does seem pretty intentionally eugenics of them to do this when it has been widely criticized and they refuse to fix it. So I'd say it is explicitly racist.

  • Though I get the point, I would caution against crying "racism!" over AI not being able to detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

    If only you'd read more than three sentences, you'd see the problem is with the training data. Instead you chose to make sure no one said the R word. Ben Shapiro would be proud.

  • Eugenics??? That's crazy.

    So you'd prefer that they not even start working with this screening method until we have gathered enough data to satisfy everyone's representation?

    Let's just do that and not do anything until everyone is happy. Nothing will happen ever and we will all collectively suffer.

    How about this. Let's let the people with the knowledge use this "racist" data and help move the bar for health forward for everyone.

  • It isn't crazy, and it's the basis for bioethics, something I had to learn about when becoming a bioengineer. I worked with people who literally designed the AI of today, and they continue to work with MIT, Google, and Stanford on machine learning... I have spoken extensively with these people about ethics, and a large portion of any AI engineer's job is literally just ethics. Actually, a lot of engineering is learning about ethics and accidents - they go hand in hand, like the Hyatt Regency walkway collapse.

    I never suggested they stop developing the screening technology, don't strawman, it's boring. I literally gave suggestions for how they can fix it and fix their data so it is no longer functioning as a tool of eugenics.

    Different case below, but related sentiment that AI is NOT a separate entity from its creators/engineers and they ABSOLUTELY should be held liable for the outcomes of what they engineer regardless of provable intent.

    You don’t think the people who make the generative algorithm have a duty to what it generates?

    And whatever you think anyway, the company itself shows that it feels obligated about what the AI puts out, because they are constantly trying to stop the AI from giving out bomb instructions and hate speech and illegal sexual content.

    The standard is not and was never if they were “entirely” at fault here. It’s whether they have any responsibility towards this (and we all here can see that they do indeed have some), and how much financially that’s worth in damages.

  • The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

    You can't diagnose melanoma by skin features alone; you need a biopsy and genetic testing too. Furthermore, some types of melanoma don't show the typical ABCDE signs.

    Histopathology gives the accurate indication of whether it's melanoma or something else, and how far it has spread in the sample.

  • I know what bioethics is and how it applies to research and engineering. Your response doesn't really get to the core of what I'm saying: that the people making the AI tool aren't racist.

    Help me out: what do the researchers creating this AI screening tool in its current form (with racist data) have to do with it being a tool of eugenics? That's quite a damning statement.

    I'm assuming you have a much deeper understanding of what kind of data this AI screening tool uses, and of the finances and whatever else that go into it. I feel that the whole "talk with Africa" idea to balance out the data doesn't sound great and is overly simplified.

    Do you really believe that the people who created this AI screening tool should be punished for using this racist data, regardless of provable intent? Even if it saved lives?

    Does this kind of punishment apply to the doctor who used this unethical AI tool? His knowledge had to go into building it somehow. Is he, by extension, a tool of eugenics too?

    I understand ethical obligations and that we need higher standards moving forward in society. But even if the data right now is unethical, and it saves lives, we should absolutely use it.

  • I addressed that point by saying their intent to be racist or not is irrelevant when we focus on the impact on the actual victims (i.e. systemic racism). Who cares about the individual engineer's morality and thoughts when we have provable, measurable evidence of racial disparity that we can correct easily?

    It literally allows black people to die and saves white people more. That's eugenics.

    It is fine to coordinate with universities in like Kenya, what are you talking about?

    I never said shit about the makers of THIS tool being punished! Learn to read! I said the tool needs to be fixed!

    Like seriously you are constantly taking the position of the white male, empathizing, then running interference for him as if he was you and as if I'm your mommy about to spank you. Stop being weird and projecting your bullshit.

    Yes, doctors who use this tool on their black patients and white patients equally would be performing eugenics, just like the doctors who sterilized indigenous women because they were poor. Again, intent and your ego aren't relevant when we focus on impacts to victims and how to help them.

    We should demand that they make getting the data to be as good for black people as for white people their #1 priority, i.e. doing studies and collecting that data.

  • Define eugenics for me, please.

    You're saying the tool in its current form, with its data, "seems pretty intentionally eugenics" and is "a tool for eugenics". And since you said the people who made that data and the AI tool, and those who are now using it, are also responsible for anything bad... they are, by your supposed extension, eugenicists/racists and whatever other grotesque and immoral thing you can think of. Because your link says that regardless of intention, the AI engineers should ABSOLUTELY be punished.

    They have to fix it, of course, so it can become something other than a tool for eugenics as it is currently. Can you see where I think your argument goes way beyond rational?

    Would I have had this conversation with you if the tool worked really well on only black people and allowed white people to die disproportionately? I honestly can't say. But I feel you would be quiet on the issue. Am I wrong?

    I don't think using the data, as it is, to save lives makes you racist or supports eugenics. You seem to believe it does. That's what I'm getting after. That's why I think we are reading different books.

    Once again...define eugenics for me, please.

    Regardless, nothing I have said means that I don't recognize institutional racism and that I don't want the data set to become more evenly distributed so it takes into consideration the full spectrum of human life and helps ALL people.

  • Yeah, I'm done educating you, tbh. It's not worth my time when you're arguing in bad faith.

    Learn what a strawman is. 90% of your post was strawman after strawman.

    Define strawman for me, kiddo. Then re-read your above comment. I counted 6, can you find all 6 strawman arguments in your comment?

    The conversation was never about you or your ego, but you've thoroughly convinced me with this conversation that you are probably both a racist and a eugenicist - a hit dog hollers, and you seriously keep identifying yourself as the racist eugenicist here with no prompting from anyone else. I guess if that's who you are, then whatever. I don't talk to eugenicist racists either.

  • I expected more from an educated person.

    But if you don't want to define the word and you cut off the conversation, then you've just left me with the belief that you are using eugenics as a "scary" word, hoping to sound smart. I believe you can represent your field better.

    I hope you have a good one.

    For anybody still reading: The AI tool is not for eugenics, the researchers should not be punished, it's not racist to use unethical data, and it helps people who might otherwise die of a horrible disease. It doesn't help all the people we want it to right now, but hopefully, in the future, it will be an amazing tool for everyone.

  • Lol

    Sure, if you declare it, it must be so. Other people can read and can see your strawmen. You just look pompous and egotistical.

  • It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, which, in most books and online images I've looked into, seem to be mostly of fair-skinned individuals.

    "...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

    Calling out things as racist really works to mask what a useful tool this could be to help screen for skin cancers.

    it isn't racism it is [describes racism]
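
    The disparity being argued about above is measurable rather than a matter of opinion. A minimal sketch of such a per-group audit, with made-up group names, labels, and predictions (no real study data):

    ```python
    def sensitivity_by_group(records):
        """records: iterable of (group, truth, prediction), where 1 = melanoma.

        Returns each group's sensitivity (true-positive rate): of the actual
        melanoma cases in that group, the fraction the model flagged.
        """
        stats = {}
        for group, truth, pred in records:
            tp, pos = stats.get(group, (0, 0))
            if truth == 1:
                pos += 1          # one more actual melanoma in this group
                if pred == 1:
                    tp += 1       # ...that the model correctly flagged
            stats[group] = (tp, pos)
        return {g: tp / pos for g, (tp, pos) in stats.items() if pos}

    # Toy records: (skin type, has melanoma, model flagged it)
    records = [
        ("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 0),
        ("darker", 1, 1), ("darker", 1, 0), ("darker", 1, 0),
    ]
    print(sensitivity_by_group(records))
    # On this toy data: lighter ≈ 0.67, darker ≈ 0.33
    ```

    A gap like this between groups is exactly the kind of "provable, measurable evidence of racial disparity" referenced earlier in the thread, independent of anyone's intent.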

  • Who said that?

    I only see a blank comment.
