
AI Chatbots for News: Should You Use ChatGPT for Your Daily Briefing?

More people are asking AI chatbots for news summaries. But are they reliable? Here's an honest assessment of using ChatGPT, Perplexity, and others for news.

AI chatbots have exploded in popularity. ChatGPT has between 300 and 400 million weekly users, and competitors like Perplexity, Gemini, and Claude are growing fast. A growing number of people are using these tools to get news summaries. But should you?

The Current State of AI News Consumption

Despite the hype, AI chatbot use for news remains limited. According to Pew Research, only about 2% of adults often get news from AI chatbots, with another 7% doing so sometimes. The vast majority (75%) never use chatbots for news at all.

However, the Reuters Institute forecasts that chatbots will become a significant news distribution channel in 2026, comparing them to "the new app stores" where publishers will need to be discoverable inside conversations.

What AI Chatbots Do Well for News

Summarisation

Chatbots are excellent at condensing complex stories into concise summaries. You can ask "What happened with [topic] today?" and get a quick overview in plain language.

Customisation

You can ask for news in your specific area of interest, at your preferred level of detail, and even in a particular tone or format. This level of customisation is hard to get from traditional sources.

Convenience

A conversational interface feels natural. You can follow up with questions, ask for more context, or request different perspectives on the same story.

The Serious Problems

Hallucination

AI chatbots can generate plausible-sounding but entirely fabricated information. For news, this is a critical flaw. If a chatbot invents a quote, misattributes a statement, or fabricates a statistic, you have no way of knowing unless you verify it against a real source.

No accountability

When a newspaper publishes something wrong, there are consequences: corrections, retractions, reputation damage, even legal liability. When a chatbot gets something wrong, nothing happens. There's no editorial standard, no fact-checking process, and no accountability mechanism.

Source opacity

Most chatbots don't clearly show where their information comes from. Even when they cite sources, the citations can be inaccurate or link to paywalled content you can't verify. You don't know whether the information comes from Reuters or from a random blog.

Timeliness gaps

Some chatbots have knowledge cutoffs or delays in indexing new content. You might ask about today's news and get yesterday's (or last week's) information presented as current.

Bias without transparency

Interestingly, people who regularly get news from chatbots tend to perceive them as unbiased. But chatbots inherit biases from their training data, and unlike a labelled news source, there's no way to see this bias or correct for it.

A Better Approach

AI chatbots can supplement your news consumption, but they shouldn't replace verified sources. A practical approach:

  • Use a verified news digest as your primary source. BriefMyNews delivers news from real, labelled sources with editorial accountability. Every story comes from a known publication with a known perspective.
  • Use AI for follow-up questions. If a story in your digest interests you, you can ask a chatbot for background context or to explain complex aspects. But treat the output as a starting point, not as verified fact.
  • Always verify AI-generated news claims. If a chatbot tells you something surprising, check it against a real source before accepting or sharing it.
  • Don't rely on AI for breaking news. Chatbots may not have the latest information, and their tendency to hallucinate is most dangerous when facts are still emerging.

The Future

AI will increasingly shape how we discover and consume news. But the fundamentals haven't changed: you need verified information from accountable sources, with transparent editorial perspectives. BriefMyNews provides this with the personalisation that AI promises, but backed by real journalism rather than generated content.

Frequently Asked Questions

Can I use ChatGPT for daily news?
You can, but with caution. ChatGPT can hallucinate (generate false information), doesn't clearly cite sources, has no editorial accountability, and may not have the most current information. It works better as a supplement to verified news sources than as a replacement for them.
Are AI chatbots reliable for news?
Not yet. Only 2% of adults regularly use chatbots for news. The main concerns are hallucination (fabricated facts), source opacity (you don't know where the information comes from), and no accountability when errors occur. For reliable news, use sources with editorial standards and transparent attribution.
What's better for news: AI chatbots or email digests?
Email digests from services like BriefMyNews are more reliable because every story comes from a verified, labelled source with editorial accountability. AI chatbots can supplement your reading by explaining complex topics, but shouldn't be your primary news source due to hallucination and accountability concerns.
Will AI replace traditional news sources?
AI will change how news is distributed and consumed, but verified journalism from accountable sources remains essential. The value of knowing where your information comes from and who stands behind it doesn't decrease as AI grows; it becomes more important.

Ready to take control of your news?

Choose your topics, pick your sources, set your schedule. Free to start.

Get Started Free