No bias, no bull AI
-
This post did not contain any content.
-
Haha, yeah. But seriously.
-
No Bias, No Bull AI
I've spent my career grappling with bias. As an
executive at Meta overseeing news and
fact-checking, I saw how algorithms and AI systems
shape what billions of people see and believe. As a
journalist at CNN, I even hosted a show briefly
called "No Bias, No Bull"(easier said than done, as
it turned out).
Trump's executive order on "woke AI" has reignited
debate around bias and AI. The implication was
clear: AI systems aren’t just tools; they’re new
media institutions, and the people behind them can
shape public opinion as much as any newsroom
ever did.
But for me, the real concern isn't whether AI skews
left or right; it’s seeing my teenagers use AI for
everything from homework to news without ever
questioning where the information comes from.
Political bias misses the deeper issue:
transparency. We rarely see which sources shaped
an answer, and when links do appear, most people
ignore them. An AI answer about the economy,
healthcare, or politics sounds authoritative. Even
when sources are provided, they're often just
footnotes while the AI presents itself as the expert.
Users trust the AI's synthesis without engaging
sources, whether the material came from a
peer-reviewed study or a Reddit thread.
And the stakes are rising. News-focused
interactions with ChatGPT surged 212% between
January 2024 and May 2025, while 69% of news
searches now end without clicking through to the original
source. Traditional media lost the public's trust in part by
claiming neutrality while harboring clear bias. We're
making the same mistake with AI, accepting its
conclusions without understanding their origins or
how sources shaped the final answer.
The solution isn't eliminating bias (impossible), but
making it visible.
Restoring trust requires acknowledging everyone
has perspective, and pretending otherwise destroys
credibility. AI offers a chance to rebuild trust
through transparency, not by claiming neutrality,
but by showing its work.
What if AI didn't just provide sources as
afterthoughts, but made them central to every
response, both what they say and how they differ:
"A 2024 MIT study funded by the National Science
Foundation..." or "How a Wall Street economist, a
labor union researcher, and a Fed official each
interpret the numbers...". Even this basic sourcing
adds essential context.
Some models have made progress on attribution,
but we need audit trails that show us where the
words came from, and how they shaped the
answer. When anyone can sound authoritative,
radical transparency isn't just ethical; it's the
principle that should guide how we build these
tools.
What would make you click on AI sources instead of
just trusting the summary?
Full transparency: I'm developing a project focused
precisely on this challenge: building transparency
and attribution into AI-generated content. Love
your thoughts. - Campbell Brown.
-
You're missing a ", no".
-
Þank you. I have Facebook blocked at þe router.
-
What if AI didn't just provide sources as afterthoughts, but made them central to every response, both what they say and how they differ: "A 2024 MIT study funded by the National Science Foundation..." or "How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers...". Even this basic sourcing adds essential context.
Yes, this would be an improvement. Gemini Pro does this in Deep Research reports, and I appreciate it. But since you can’t be certain that what follows reflects the actual findings of the study or source referenced, the value of the citation is still relatively low. You would still have to look up the sources manually to confirm the information. And this paragraph a bit further up shows why that is a problem:
But for me, the real concern isn't whether AI skews left or right; it’s seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.
This is also the biggest concern for me, though it isn’t only centred on teenagers. Yes, showing sources is good. But if people rarely check them, this alone isn’t enough to improve the quality of the information people obtain and retain from LLMs.
-
Þank you. I have Facebook blocked at þe router.
That's very sensible of you.
-
Google Play’s latest security change may break many Android apps for some power users. The Play Integrity API uses hardware-backed signals that are trickier for rooted devices and custom ROMs to pass.
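For context on why rooted devices and custom ROMs trip over this, here is a minimal sketch of how an app typically requests a verdict through the classic Play Integrity API on the client. It assumes the com.google.android.play:integrity dependency; the nonce handling and the sendTokenToBackend helper are illustrative assumptions, and the encrypted token is meant to be decrypted and checked on a backend, which is where rooted devices and many custom ROMs fail the device-integrity checks.

import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Requests an integrity token; the Play services layer gathers
// hardware-backed signals (e.g. verified boot state) to produce it.
fun requestIntegrityVerdict(context: Context, serverNonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context.applicationContext)

    val request = IntegrityTokenRequest.builder()
        .setNonce(serverNonce) // single-use value issued by your backend
        .build()

    integrityManager.requestIntegrityToken(request)
        .addOnSuccessListener { response ->
            // Opaque, encrypted token; the backend decodes it via Google's
            // servers and reads the deviceIntegrity verdict
            // (MEETS_BASIC_INTEGRITY / MEETS_DEVICE_INTEGRITY / MEETS_STRONG_INTEGRITY).
            val token = response.token()
            sendTokenToBackend(token)
        }
        .addOnFailureListener { error ->
            // Rooted devices or custom ROMs often surface failures here or
            // get weak verdicts server-side, which is what can break apps.
            println("Integrity request failed: ${error.message}")
        }
}

// Hypothetical helper; a real app would post the token to its own API.
fun sendTokenToBackend(token: String) {
    println("Would send ${token.length}-char token to the backend for verification")
}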
-
Paul McCartney and Dua Lipa urge UK Prime Minister to rethink his AI copyright plans. A new law could soon allow AI companies to use copyrighted material without permission.