Paul McCartney and Dua Lipa urge UK Prime Minister to rethink his AI copyright plans. A new law could soon allow AI companies to use copyrighted material without permission.

Technology
  • I don't disagree with you, but AI companies shouldn't get an exclusive free pass.

    Oh yes, I am not saying that at all. I am still very unsure of my views on AI from a precautionary standpoint, and I think its commercial use will lead to more harm than good. But if these things are the closest analogues we have for looking at how humans learn and create, they show that IP is ridiculous. We do not even need them to see this: if an idea were purely and solely one person's property, then someone from North Sentinel Island (assuming they had never left or studied oncology) would be as likely to invent a cure for brain cancer as a team of oncologists at Oxford.

  • That feels categorically different unless AI has legal standing as a person. We're talking about training LLMs; there's nothing more going on here than people using computers.

    So then anyone who uses a computer to make music would be in violation?

    Or is it some amount of computer-generated content? How many notes? If it's not a sample of a song, how does one know how many of those notes should be attributed to which artist being stolen from?

    What if I have someone else listen to a song and they generate a few bars of a song for me? Is it different that a computer listened and then generated output?

    To me it sounds like artists were open to some types of violations but not others. If an AI model listened to the radio, most of these issues go away, unless we are saying that humans who listen to music and write similar songs are OK, but people who write music using computers that calculate the statistically most common song are breaking the law.
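    To give a concrete toy example of that last scenario (the note data below is made up, and this is not how any real model or artist's tool actually works), "calculating the statistically most common song" could be as simple as a small Markov chain that always picks the most frequent next note it has "heard":

    ```python
    # Toy sketch: a first-order Markov chain that "writes" a melody by always
    # choosing the statistically most common next note from what it has heard.
    # The training melodies are invented purely for illustration.
    from collections import Counter, defaultdict

    training_melodies = [
        ["C", "E", "G", "E", "C"],
        ["C", "E", "G", "A", "G"],
        ["G", "E", "C", "E", "G"],
    ]

    # Count which note most often follows each note across everything "heard".
    transitions = defaultdict(Counter)
    for melody in training_melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current][nxt] += 1

    def most_common_song(start="C", length=8):
        """Greedily take the most common next note at every step."""
        song = [start]
        for _ in range(length - 1):
            followers = transitions[song[-1]]
            if not followers:
                break
            song.append(followers.most_common(1)[0][0])
        return song

    print(most_common_song())  # ['C', 'E', 'G', 'E', 'G', 'E', 'G', 'E']
    ```

    Every note of that output is traceable only to counts over many inputs at once, which is exactly the attribution problem I'm describing.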

  • It's not a Ponzi scheme. Just because you don't like it doesn't mean it's a scam, and even if it were a scam, a Ponzi scheme isn't the kind of scam it would be.

    The absolute worst you could call it is false advertising, because AI does actually work, just not very well.

    A company that makes negative income every quarter, forever, and whose latest model costs an order of magnitude more power and is worse than the previous one, is worth between $150 Bn and $300 Bn. Many other competing companies are equally overvalued.

    These are businesses that are only valuable because people keep investing in them. A Ponzi scheme.

  • If AI companies can pirate, so can individuals.

    Is the AI doing anything that isn't already allowed for humans? The thing is, generative AI doesn't copy someone's art. It's more akin to learning from someone's art and creating your own art with that influence. Given that we want to continue allowing humans access to art for learning, what's the logical difference when an AI does the same?

    Did this already play out at Reddit? AI was one of the reasons I left, but I believe it's a different scenario. I freely contributed my content to Reddit for the purposes of building an interactive community, but they changed the terms without my consent. I did NOT contribute my content so they could make money selling it for AI training.

    The only logical distinction I see is that AIs aren't human: an exception for humans does not apply to non-humans, even if the activity is similar.

  • This post did not contain any content.

    Most of us make fun of the stupid everyday masses for supporting laws that only benefit people who are vastly richer than they'll ever be. But I'm almost guaranteed to get douchevoted for pointing out that the vast majority of musicians never get famous and never get recording contracts, but make their living day to day playing little gigs wherever they can find them. They don't materially suffer if AI includes patterns from their creations in its output, because they don't get any revenue streams from it to begin with.

    Realistically they're the people most of us should identify with, but instead we rally behind the likes of Paul McCartney and Elton John as if they represent us. McCartney's a billionaire and Elton's more than halfway there; they both own recording companies, ffs. If you're going to do simple meme-brained thinking and put black or white hats on people, at least get the hats right.

  • A company that makes negative income every quarter, forever, and whose latest model costs an order of magnitude more power and is worse than the previous one, is worth between $150 Bn and $300 Bn. Many other competing companies are equally overvalued.

    These are businesses that are only valuable because people keep investing in them. A Ponzi scheme.

    AI has been around for many years, and a lot has happened in that time. It has had periods of high and low interest, and the low periods have been dubbed AI winters.

  • AI has been around for many years, and a lot has happened in that time. It has had periods of high and low interest, and the low periods have been dubbed AI winters.

    And the current AI spring is just old people buying into bullshit marketing and putting all their money into a Ponzi scheme.

  • And the current AI spring is just old people buying into bullshit marketing and putting all their money into a Ponzi scheme.

    Throughout history a lot of money has been spent on useless things, but saying that AI is a Ponzi scheme is, I feel, the same as saying that the Apollo program was a Ponzi scheme, or that government-funded research is another Ponzi scheme.

    PS: There were people who were against the Apollo program because they considered it an unnecessary expense, although today it is the Apollo program that is remembered, not them.

  • Throughout history a lot of money has been spent on useless things, but saying that AI is a Ponzi scheme is, I feel, the same as saying that the Apollo program was a Ponzi scheme, or that government-funded research is another Ponzi scheme.

    PS: There were people who were against the Apollo program because they considered it an unnecessary expense, although today it is the Apollo program that is remembered, not them.

    The Apollo program had an achievable goal, lots of beneficial byproducts, and was administered by public agencies.

    Nothing about it is comparable. It's like saying people hate snakes and some people also hate dogs, therefore snakes are dogs.

  • This post did not contain any content.

    What is the actual justification for this? Everyone has to pay for this except AI companies, so AI can continue to develop into something universally regarded as a negative?

  • What is the actual justification for this? Everyone has to pay for this except AI companies, so AI can continue to develop into something universally regarded as a negative?

    Why do you say AI is universally regarded as a negative?

    Edit: if you're going to downvote me, can you explain why? I am not saying AI is a good thing here. I'm just asking for evidence that it's universally disliked, i.e. that there aren't a lot of fans. It seems there are lots of people coming to the defense of AI in this thread, so it clearly isn't universally disliked.

  • Is the AI doing anything that isn't already allowed for humans? The thing is, generative AI doesn't copy someone's art. It's more akin to learning from someone's art and creating your own art with that influence. Given that we want to continue allowing humans access to art for learning, what's the logical difference when an AI does the same?

    Did this already play out at Reddit? AI was one of the reasons I left, but I believe it's a different scenario. I freely contributed my content to Reddit for the purposes of building an interactive community, but they changed the terms without my consent. I did NOT contribute my content so they could make money selling it for AI training.

    The only logical distinction I see is that AIs aren't human: an exception for humans does not apply to non-humans, even if the activity is similar.

    You picked the wrong thread for a nuanced question on a controversial topic.

    But it seems the UK indeed has laws for this already, if the article is to be believed, as it doesn't currently allow AI companies to train on copyrighted material. As far as I know, in some other jurisdictions a normal person would absolutely be allowed to pull a bunch of publicly available information, learn from it, and decide to make something new based on the objective information found within it. And generally, that's the rationale AI companies have used as well, since there have been widely accepted landmark rulings that computers analyzing copyrighted information is not copyright infringement, such as the case against Google for indexing copyrighted material in its search results. But perhaps an equivalent ruling was never accepted in the UK (which does seem strange, as Google does operate there). Then again, laws are messy, perhaps there is an exception somewhere, and I'm certainly not an expert on UK law.

    But people sadly don't really come into this thread to discuss the actual details; they just see a headline that invokes a feeling of "AI Bad", and so you coming in here with a reasonable question makes you a target. I wholly expect to be downvoted as well.

  • Is the AI doing anything that isn't already allowed for humans? The thing is, generative AI doesn't copy someone's art. It's more akin to learning from someone's art and creating your own art with that influence. Given that we want to continue allowing humans access to art for learning, what's the logical difference when an AI does the same?

    Did this already play out at Reddit? AI was one of the reasons I left, but I believe it's a different scenario. I freely contributed my content to Reddit for the purposes of building an interactive community, but they changed the terms without my consent. I did NOT contribute my content so they could make money selling it for AI training.

    The only logical distinction I see is that AIs aren't human: an exception for humans does not apply to non-humans, even if the activity is similar.

    Is the AI doing anything that isn't already allowed for humans? The thing is, generative AI doesn't copy someone's art. It's more akin to learning from someone's art and creating your own art with that influence. Given that we want to continue allowing humans access to art for learning, what's the logical difference when an AI does the same?

    AI stans always say stuff like this, but it doesn't make sense to me at all.

    AI does not learn the same way that a human does: it has no senses of its own with which to observe the world or art, it has no lived experiences, it has no agency, preferences or subjectivity, and it has no real intelligence with which to interpret or understand the work that it is copying from. AI is simply a matrix of weights that has arbitrary data superimposed on it by people and companies.

    Are you an artist or a creative person?

    If you are then you must know that the things you create are certainly indirectly influenced by SOME of the things that you have experienced (be it walking around on a sunny day, your favorite scene from your favorite movie, the lyrics of a song, etc.), AS WELL AS your own unique and creative persona, your own ideas, your own philosophy, and your own personal development.

    Look at how an artist creates a painting and compare it to how generative AI creates a painting. Similarly, look at how artists train and learn their craft and compare it to how generative AI models are trained. It's an apples-to-oranges comparison. Outside of the marketing labels of "artificial intelligence" and "machine learning", it's nothing like real intelligence or learning at all.

    (And that's still ignoring the obvious corporate element and the four factors of fair use analysis (US law, not UK, mind you), for example the potential market effect of building an automated system that uses people's artwork to compete directly against them.)
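    To make the "matrix of weights" point above concrete, here is a minimal, hypothetical sketch (nowhere near the scale or architecture of a real model, with randomly generated stand-in data): the "model" is literally a matrix of numbers, and training just nudges those numbers until the outputs statistically match whatever data was fed in. There is no perception, experience, or intent anywhere in the loop.

    ```python
    # Minimal sketch: the "model" here is just a weight matrix that an optimizer
    # adjusts until its outputs imitate the statistics of the supplied data.
    # The data is randomly generated purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    X = rng.normal(size=(200, 8))     # 200 made-up examples, 8 features each
    true_W = rng.normal(size=(8, 3))  # the pattern hidden inside the data
    Y = X @ true_W                    # targets the model is told to imitate

    W = np.zeros((8, 3))              # the entire "model": a matrix of weights
    learning_rate = 0.1

    for step in range(1000):
        pred = X @ W                      # what the weights currently produce
        grad = X.T @ (pred - Y) / len(X)  # direction that imitates the data better
        W -= learning_rate * grad         # superimpose the data onto the weights

    print(np.abs(W - true_W).max())   # near zero: the weights now encode the data's pattern
    ```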

  • You picked the wrong thread for a nuanced question on a controversial topic.

    But it seems the UK indeed has laws for this already, if the article is to be believed, as it doesn't currently allow AI companies to train on copyrighted material. As far as I know, in some other jurisdictions a normal person would absolutely be allowed to pull a bunch of publicly available information, learn from it, and decide to make something new based on the objective information found within it. And generally, that's the rationale AI companies have used as well, since there have been widely accepted landmark rulings that computers analyzing copyrighted information is not copyright infringement, such as the case against Google for indexing copyrighted material in its search results. But perhaps an equivalent ruling was never accepted in the UK (which does seem strange, as Google does operate there). Then again, laws are messy, perhaps there is an exception somewhere, and I'm certainly not an expert on UK law.

    But people sadly don't really come into this thread to discuss the actual details; they just see a headline that invokes a feeling of "AI Bad", and so you coming in here with a reasonable question makes you a target. I wholly expect to be downvoted as well.

    Oh are we giving AI the same rights as humans now?
    On what grounds?

  • Oh are we giving AI the same rights as humans now?
    On what grounds?

    I never claimed that in this case. As I said in my response: there have been lawsuits won establishing that machines are allowed to index and analyze copyrighted material without infringing on such rights, so long as they only extract objective information, which is what AI typically extracts. I'm not a lawyer, and your jurisdiction may differ, but this page has a good overview: https://blog.apify.com/is-web-scraping-legal/

    EDIT: The US section of that page mentions the case I referred to: Authors Guild v. Google
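    For what it's worth, the kind of "objective information" extraction I mean looks roughly like this toy sketch (the documents are made up, and this is not Google's or anyone else's actual pipeline): the index keeps facts about the texts, such as term counts and which document mentions which word, rather than storing or republishing the texts themselves.

    ```python
    # Toy sketch: "indexing" texts by extracting objective facts about them
    # (term frequencies, which document contains which word) instead of
    # reproducing the texts. The documents are invented for illustration.
    import re
    from collections import Counter, defaultdict

    documents = {
        "doc1": "The quick brown fox jumps over the lazy dog.",
        "doc2": "The dog sleeps while the fox watches the quick rabbit.",
    }

    term_counts = {}                    # per-document word statistics
    inverted_index = defaultdict(set)   # word -> documents that mention it

    for doc_id, text in documents.items():
        words = re.findall(r"[a-z]+", text.lower())
        term_counts[doc_id] = Counter(words)
        for word in set(words):
            inverted_index[word].add(doc_id)

    # Queries are answered from the extracted facts, not from the original prose.
    print(sorted(inverted_index["fox"]))  # ['doc1', 'doc2']
    print(term_counts["doc2"]["the"])     # 3
    ```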

  • How many authors do you think would have written the books they did, if they weren't able to make a living from their work? Most of the people creating works before copyright either had a patron of some description, or outright worked for an organisation.

    You should read the opinion of Stephen King about that precise point. The short version: "I'd write books even if it was illegal".

  • Why do you say AI is universally regarded as a negative?

    Edit: if you're going to downvote me, can you explain why? I am not saying AI is a good thing here. I'm just asking for evidence that it's universally disliked, i.e. that there aren't a lot of fans. It seems there are lots of people coming to the defense of AI in this thread, so it clearly isn't universally disliked.

    Because pretty much nobody wants it or likes it.

  • Because pretty much nobody wants it or likes it.

    That's just not true. ChatGPT and co. are hugely popular, which is a big part of the issue.

  • That's just not true. ChatGPT and co. are hugely popular, which is a big part of the issue.

    Nazism was hugely popular in Germany in the early 20th century, but was it a good thing?

  • This post did not contain any content.

    A new law could soon allow AI companies to use copyrighted material without permission.

    Good. Copyright and patent laws need to die.

    All the money wasted enforcing them and taken from customers could be better spent on other things.

    Creators will still create, as they always have. We just won't have millionaire scumbags such as 'paul mccartney' living like kings while children starve.
