
Paul McCartney and Dua Lipa urge UK Prime Minister to rethink his AI copyright plans. A new law could soon allow AI companies to use copyrighted material without permission.

Technology
  • I don't disagree with you, but AI companies shouldn't get an exclusive free pass.

    Oh yes, I'm not saying that at all. I'm still very unsure of my views on AI from a precautionary standpoint, and I think its commercial use will lead to more harm than good. But if these systems are the closest analogues we have for how humans learn and create, they show how ridiculous IP is. We don't even need them to see this: if an idea were purely and solely one person's property, then someone from the Sentinel Islands (assuming they had not left the island and studied oncology) would be as likely to invent the cure for brain cancer as a team of oncologists at Oxford.

  • That feels categorically different unless AI has legal standing as a person. We're talking about training LLMs; there's nothing more going on here than people using computers.

    So then anyone who uses a computer to make music would be in violation?

    Or is it some amount of computer-generated content? How many notes? If it's not a sample of a song, how does one know how many of those notes should be attributed to which artist they were supposedly stolen from?

    What if I have someone else listen to a song and they generate a few bars of a song for me? Is it different that a computer listened and then generated output?

    To me it sounds like artists were open to some types of violations but not others. If an AI model listened to the radio, most of these issues would go away, unless we are saying that humans who listen to music and write similar songs are fine, but people who write music using computers that calculate the statistically most common song are breaking the law.

  • It's not a Ponzi scheme. Just because you don't like it doesn't mean it's a scam, and even if it were a scam, a Ponzi scheme isn't the type of scam it would be.

    The absolute worst you could call it is false advertising, because AI does actually work, just not very well.

    A company that has made negative income every quarter, and whose latest model costs an order of magnitude more power and performs worse than the previous one, is worth between $150 billion and $300 billion. Many other competing companies are equally overvalued.

    These are businesses that are only valuable because people keep investing in them. A Ponzi scheme.

  • If AI companies can pirate, so can individuals.

    Is the AI doing anything that isn't already allowed for humans? The thing is, generative AI doesn't copy someone's art. It's more akin to learning from someone's art and creating your own art with that influence. Given that we want to continue allowing humans access to art for learning, what's the logical difference when an AI does the same?

    Did this already play out at Reddit? AI was one of the reasons I left, but I believe it's a different scenario. I freely contributed my content to Reddit for the purposes of building an interactive community, but they changed the terms without my consent. I did NOT contribute my content so they could make money selling it for AI training.

    The only logical distinction I see is that AIs aren't human: an exception for humans does not apply to non-humans, even if the activity is similar.

  • This post did not contain any content.

    Most of us make fun of the stupid everyday masses for supporting laws that only benefit people who are vastly richer than they'll ever be. But I'm almost guaranteed to get douchevoted for pointing out that the vast majority of musicians never get famous, never get recording contracts, but make their living day to day playing little gigs wherever they can find them. They don't materially suffer if AI includes patterns from their creations in its output, because they don't get any revenue streams from it to begin with. Realistically they're the people most of us should identify with, but instead we rally behind the likes of Paul McCartney and Elton John as if they represent us. McCartney's a billionaire and Elton's more than halfway there - they both own recording companies ffs. If you're going to do simple meme-brained thinking and put black or white hats on people, at least get the hats right.

  • A company that has made negative income every quarter, and whose latest model costs an order of magnitude more power and performs worse than the previous one, is worth between $150 billion and $300 billion. Many other competing companies are equally overvalued.

    These are businesses that are only valuable because people keep investing in them. A Ponzi scheme.

    AI has been around for many years, and a lot has happened in that time. It's had periods of high and low interest, and its lows have been dubbed AI winters.

  • AI has been around for many years, and a lot has happened in that time. It's had periods of high and low interest, and its lows have been dubbed AI winters.

    And the current AI spring is just old people buying into bullshit marketing and putting all their money in a Ponzi scheme.

  • And the current AI spring is just old people buying into bullshit marketing and putting all their money in a Ponzi scheme.

    Throughout history, a lot has been spent on useless things, but saying that AI is a Ponzi scheme is, I feel, the same as saying that the Apollo program was a Ponzi scheme, or that government-funded research is another Ponzi scheme.

    PS: There were people who were against the Apollo program because they considered it an unnecessary expense, although today it is the Apollo program that is remembered, not its critics.

  • Throughout history, a lot has been spent on useless things, but saying that AI is a Ponzi scheme is, I feel, the same as saying that the Apollo program was a Ponzi scheme, or that government-funded research is another Ponzi scheme.

    PS: There were people who were against the Apollo program because they considered it an unnecessary expense, although today it is the Apollo program that is remembered, not its critics.

    The Apollo program had an achievable goal, lots of beneficial byproducts, and was administered by public agencies.

    Nothing about it is comparable. It's like saying people hate snakes and some people also hate dogs, therefore snakes are dogs.

  • This post did not contain any content.

    What is the actual justification for this? Everyone has to pay for copyrighted material except AI companies, so AI can continue to develop into something universally regarded as a negative?

  • What is the actual justification for this? Everyone has to pay for copyrighted material except AI companies, so AI can continue to develop into something universally regarded as a negative?

    Why do you say AI is universally regarded as a negative?

    Edit: if you're going to downvote me, can you explain why? I'm not saying AI is a good thing here. I'm just asking for evidence that it's universally disliked, i.e. that there aren't a lot of fans. It seems there are lots of people coming to the defense of AI in this thread, so it clearly isn't universally disliked.

  • Is the AI doing anything that isn't already allowed for humans? The thing is, generative AI doesn't copy someone's art. It's more akin to learning from someone's art and creating your own art with that influence. Given that we want to continue allowing humans access to art for learning, what's the logical difference when an AI does the same?

    Did this already play out at Reddit? AI was one of the reasons I left, but I believe it's a different scenario. I freely contributed my content to Reddit for the purposes of building an interactive community, but they changed the terms without my consent. I did NOT contribute my content so they could make money selling it for AI training.

    The only logical distinction I see is that AIs aren't human: an exception for humans does not apply to non-humans, even if the activity is similar.

    You picked the wrong thread for a nuanced question on a controversial topic.

    But it seems the UK indeed has laws for this already, if the article is to be believed, since it doesn't currently allow AI companies to train on copyrighted material. As far as I know, in some other jurisdictions a normal person would absolutely be allowed to pull a bunch of publicly available information, learn from it, and decide to make something new based on the objective information found within. Generally, that's the rationale AI companies have used as well, since there have been landmark cases in the past, such as the one against Google for indexing copyrighted material in its search results, where computers analyzing copyrighted information was ruled not to be copyright infringement and widely accepted. But perhaps an analogous ruling was never made in the UK (which does seem strange, as Google operates there). Laws are messy, perhaps there is an exception somewhere, and I'm certainly not an expert on UK law.

    But sadly, people don't really come into this thread to discuss the actual details; they just see a headline that evokes a feeling of "AI bad", and so you coming in here with a reasonable question makes you a target. I wholly expect to be downvoted as well.

  • Is the AI doing anything that isn't already allowed for humans? The thing is, generative AI doesn't copy someone's art. It's more akin to learning from someone's art and creating your own art with that influence. Given that we want to continue allowing humans access to art for learning, what's the logical difference when an AI does the same?

    Did this already play out at Reddit? AI was one of the reasons I left, but I believe it's a different scenario. I freely contributed my content to Reddit for the purposes of building an interactive community, but they changed the terms without my consent. I did NOT contribute my content so they could make money selling it for AI training.

    The only logical distinction I see is that AIs aren't human: an exception for humans does not apply to non-humans, even if the activity is similar.


    AI stans always say stuff like this, but it doesn't make sense to me at all.

    AI does not learn the same way a human does: it has no senses of its own with which to observe the world or art, no lived experience, no agency, preferences or subjectivity, and no real intelligence with which to interpret or understand the work it is copying from. AI is simply a matrix of weights onto which people and companies superimpose arbitrary data (a toy sketch of what that means in code follows at the end of this comment).

    Are you an artist or a creative person?

    If you are then you must know that the things you create are certainly indirectly influenced by SOME of the things that you have experienced (be it walking around on a sunny day, your favorite scene from your favorite movie, the lyrics of a song, etc.), AS WELL AS your own unique and creative persona, your own ideas, your own philosophy, and your own personal development.

    Look at how an artist creates a painting and compare it to how generative AI creates a painting. Similarly, look at how artists train and learn their craft and compare it to how generative AI models are trained. It's an apples-to-oranges comparison. Outside of the marketing labels of "artificial intelligence" and "machine learning", it's nothing like real intelligence or learning at all.

    (And that's still ignoring the obvious corporate element and the four factors of fair-use analysis (US law, not UK, mind you), for example the potential market effect of building an automated system that uses people's artwork to compete directly against them.)
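
    As a toy illustration of the "matrix of weights" point above, here is a minimal sketch in Python. Everything in it (the six-word vocabulary, the layer sizes, the random weights) is invented for the example and has nothing to do with any real model or its training data; it only shows the general shape of the mechanism.

    ```python
    import numpy as np

    # Toy "language model": nothing but weight matrices and a softmax.
    # Real LLMs add attention layers and billions of learned parameters,
    # but the weights-in, token-probabilities-out principle is the same.
    rng = np.random.default_rng(0)
    vocab = ["the", "long", "winding", "road", "melody", "night"]
    V, D = len(vocab), 8               # vocabulary size, embedding width

    embed = rng.normal(size=(V, D))    # token id -> vector
    hidden = rng.normal(size=(D, D))   # one "layer" of weights
    unembed = rng.normal(size=(D, V))  # vector -> score per token

    def next_token_distribution(token_id: int) -> np.ndarray:
        """Probability of every vocabulary token given the previous one."""
        h = np.tanh(embed[token_id] @ hidden)
        logits = h @ unembed
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()         # softmax -> probabilities

    # "Generation" is just sampling from that distribution, over and over.
    token = 0                          # start from "the"
    out = [vocab[token]]
    for _ in range(5):
        probs = next_token_distribution(token)
        token = int(rng.choice(V, p=probs))
        out.append(vocab[token])
    print(" ".join(out))
    ```

    Nothing in that loop observes, prefers, or understands anything; it is matrix multiplication followed by sampling from the resulting probabilities.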

  • You picked the wrong thread for a nuanced question on a controversial topic.

    But it seems the UK indeed has laws for this already, if the article is to be believed, since it doesn't currently allow AI companies to train on copyrighted material. As far as I know, in some other jurisdictions a normal person would absolutely be allowed to pull a bunch of publicly available information, learn from it, and decide to make something new based on the objective information found within. Generally, that's the rationale AI companies have used as well, since there have been landmark cases in the past, such as the one against Google for indexing copyrighted material in its search results, where computers analyzing copyrighted information was ruled not to be copyright infringement and widely accepted. But perhaps an analogous ruling was never made in the UK (which does seem strange, as Google operates there). Laws are messy, perhaps there is an exception somewhere, and I'm certainly not an expert on UK law.

    But sadly, people don't really come into this thread to discuss the actual details; they just see a headline that evokes a feeling of "AI bad", and so you coming in here with a reasonable question makes you a target. I wholly expect to be downvoted as well.

    Oh, are we giving AI the same rights as humans now?
    On what grounds?

  • Oh, are we giving AI the same rights as humans now?
    On what grounds?

    I never claimed that in this case. As I said in my response: there have been lawsuits won establishing that machines are allowed to index and analyze copyrighted material without infringing those rights, so long as they only extract objective information, which is what AI typically extracts. I'm not a lawyer, and your jurisdiction may differ, but this page has a good overview: https://blog.apify.com/is-web-scraping-legal/

    EDIT: The US section on that page mentions the case I referred to: Authors Guild v. Google.

  • How many authors do you think would have written the books they did, if they weren't able to make a living from their work? Most of the people creating works before copyright either had a patron of some description, or outright worked for an organisation.

    You should read the opinion of Stephen King about that precise point. The short version: "I'd write books even if it was illegal".

  • Why do you say AI is universally regarded as a negative?

    Edit: if you're going to downvote me, can you explain why? I'm not saying AI is a good thing here. I'm just asking for evidence that it's universally disliked, i.e. that there aren't a lot of fans. It seems there are lots of people coming to the defense of AI in this thread, so it clearly isn't universally disliked.

    Because pretty much nobody wants it or likes it.

  • Because pretty much nobody wants it or likes it.

    That's just not true; ChatGPT & co. are hugely popular, which is a big part of the issue.

  • That's just not true; ChatGPT & co. are hugely popular, which is a big part of the issue.

    Nazism was hugely popular in Germany in the early 20th century, but was it a good thing?

  • This post did not contain any content.

    A new law could soon allow AI companies to use copyrighted material without permission.

    Good. Copyright and patent laws need to die.

    All the money wasted enforcing them and taken from customers could be better spent on other things.

    Creators will still create, as they always have. We just won't have millionaire scumbags such as "Paul McCartney" living like kings while children starve.

  • 8 votes, 3 posts, 0 views
    Reminds me of a quote from the game Alpha Centauri: "I think, and my thoughts cross the barrier into the synapses of the machine, just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams, the sensibility of the machine invades the periphery of my consciousness: dark, rigid, cold, alien. Evolution is at work here, but just what is evolving remains to be seen." (Commissioner Pravin Lal, "Man and Machine")
  • 512 votes, 54 posts, 0 views
    My cousin partially set his bedroom on fire doing something very similar with the foil from chewing gum. This was in the 1980s, though, so no one really cared; I'm pretty sure he just got shouted at.
  • 108 votes, 3 posts, 0 views
    A private company is selling cheap tablets to inmates to let them communicate with their family. They have to use "digital stamps" to send messages, at 35 cents apiece, sold in packs of 5, 10 or 20. Each stamp covers up to 20,000 characters or a single image. They also sell songs at $1.99 apiece, and some people have spent thousands over the years. That's also now just going away. Then you get to the part about the new company, which already has a system in Tennessee where inmates have to pay 3 to 5 cents per minute of tablet usage, be that watching a movie they've bought or just typing a message.
  • Meta Reportedly Eyeing 'Super Sensing' Tech for Smart Glasses

    Technology, 33 votes, 4 posts, 0 views
    I see your point, but I also just genuinely don't have a mind for that shit. Even with my own close friends and family, it never pops into my head to ask about that vacation they just got back from or what their kids are up to. I rely on social cues from others, mainly my wife, to sort of kick-start my brain. I just started a new job. I can't remember who said they were into fishing and who didn't, and now it's anxiety-inducing to try to figure out who is who. Or they ask me a friendly question, I get caught up answering, and when I'm done I forget to ask it back to them (because frequently asking someone about their weekend or kids or whatever is their way of getting to share their own life with you, but my brain doesn't think that way).

    I get what you're saying. It could absolutely be used for performative interactions, but for some of us people drift away because we aren't good at being curious about them or remembering details like that. And also, I have to sit through awkward lunches at work where no one really knows what to talk about or ask about, because outside of work we are completely alien to one another. And it's fine. It wouldn't be worth the damage it does; I have left behind all personally identifiable social media for the same reason. But I do hate how social anxiety and ADHD make friendship so fleeting.
  • Indian Government orders censoring of accounts on X

    Technology, 149 votes, 12 posts, 0 views
    Why? Because you can’t sell them?
  • 588 votes, 77 posts, 1 view
    When a Lemmy instance owner gets a legal request from a foreign country's government to take down content, after they're done shitting themselves they'll either take the content down or have to implement a country-wide block on that country, along with not allowing any citizens of that country to use their instance no matter where they are located. Block me, I don't care. You're just proving that you can't handle the truth and being challenged with it.
  • 2 votes, 8 posts, 0 views
    IMO stuff like that is why a good trainer is important. IMO it's stronger evidence that proper user-centered design should be done, and a usable, intuitive UX and set of APIs developed. But because the buyer of this heap of shit is some C-level, there is no incentive to actually make it usable for the unfortunate peons who are forced to interact with it. See also SFDC and every ERP solution in existence.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology, 0 votes, 2 posts, 0 views
    tetragrade@leminal.space:
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization and advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.