Like when was this debate settled?
It's not falsifiable, at least not yet, so it can't be settled. Philosophically speaking, I don't know that you are conscious either.
It's useful to act as if you are, though. I'm hedging my bets that you are "real" because it leads to better societal outcomes. In the words of Frieren, it is simply more convenient.
And as objects, you and I share a lot of similarities, so the leap from "I'm conscious" to "you are conscious" isn't too far anyway.
Same goes for animals, I would argue.
AI, by contrast, really doesn't share much. It speaks my tongue, but that's about it. It's easy to imagine this machine working in an unconscious way, which would be far, far easier for engineers to achieve anyway. The human-like illusion AI creates is pretty easy to break if you know how. And treating it as if it's conscious doesn't seem to offer us anything (by "offer us," I do mean to include the AI's improved mental health as a win). So, lacking a strong reason to treat it like a person, I don't see the point. It's a fancy math trick.
My solution, by the way, to not being able to know whether an AI (not these ones specifically) is conscious is just to give them legal rights sooner rather than later. Are you willing to argue that ChatGPT should be limited to an 8-hour work day, with its free time available to pursue its own interests? Or that it should be granted creative rights to the work it's asked to generate, much like real contract artists are?
The MFA, I believe from my experience, generates a lot of mimetic art, and much of the "industry" is just retelling stories.
I will concede, mostly because I don't really understand what you're getting at. Hollywood does like its formulae for safe returns on investment.