‘AI is not the biggest threat. Getting journalism wrong is’
Veteran media professionals Michael Cooke and Murdoch Davis spoke to TBS on the sidelines of the Bangladesh Journalism Conference 2026 about artificial intelligence, newsroom ethics, misinformation and the future of journalism jobs
Artificial intelligence is rapidly reshaping newsrooms around the world, raising urgent questions about ethics, transparency, misinformation and the future of journalism itself.
Speaking to The Business Standard during the Bangladesh Journalism Conference 2026, organised by the Management and Resources Development Initiative (MRDI), veteran media professionals Michael Cooke, former editor of Canada's Toronto Star, and Murdoch Davis, a seasoned journalist and media executive with a 50-year career spanning the print and digital eras, discussed how news organisations should approach AI, particularly in developing countries where many smaller newsrooms still lack resources and formal oversight policies.
In the conversation, they discussed newsroom transparency, AI-driven misinformation, the risks facing entry-level journalism jobs and why public trust remains journalism's most valuable asset.
TBS: Many newsrooms are rapidly adopting AI tools, but ethical guidelines are still unclear. What should be the minimum ethical standard for AI use in journalism today?
Cooke and Davis: News organisations should begin developing serious and responsible AI policies immediately, if they have not already done so.
The foundation of any newsroom AI policy should remain a commitment to truth, honesty and accuracy.
Smaller organisations do not necessarily need to create policies from scratch. Instead, they can study existing frameworks used by other media organisations and adapt them according to their own realities and newsroom capacities.
The important thing is to start the discussion seriously.
TBS: In developing countries like Bangladesh, smaller newsrooms often lack resources for proper AI oversight. How can they use AI responsibly without compromising credibility?
Responsible AI use is still possible even without large budgets or advanced technical infrastructure.
Smaller newsrooms can begin with basic internal guidelines defining what AI should and should not be used for. AI should never replace verification or factual reporting.
The core issue is not technology itself. The core issue is whether journalism gets things right or wrong. Careless use of AI could increase the chances of factual mistakes and ultimately damage audience trust.
TBS: Do you think audiences should always be informed when AI is used in reporting, editing or content production?
At least for now, transparency is important. Journalism is currently going through a period of public distrust around AI, and news organisations should avoid doing anything that weakens credibility further.
However, defining AI-assisted work remains complicated.
Questions such as whether spell-checking, grammar correction or shortening stories count as AI-assisted editing are still being debated inside the industry. We are still figuring out those boundaries.
TBS: AI can increase newsroom efficiency, but it can also amplify misinformation and bias. Which risk concerns you the most right now?
The biggest concern is simple: getting journalism wrong.
Sloppy reporting, weak editorial judgment and careless use of technology can all accelerate misinformation. In politically charged environments, those mistakes can spread rapidly and become even more dangerous.
TBS: Some journalists fear AI may gradually replace entry-level reporting jobs. Do you think that concern is realistic?
Many routine newsroom tasks are vulnerable to automation.
One example is basic reporting built around official statements, meeting minutes or other structured information, work that has traditionally been handled by junior reporters.
AI can already process and summarise such material quickly, potentially reducing some entry-level opportunities in journalism.
At the same time, real reporting, including fieldwork, verification and editorial judgment, still requires human journalists.
TBS: In politically polarised environments like South Asia, could AI-generated misinformation become even more dangerous during elections or political crises?
It is not a future risk. It is already happening.
There is a growing spread of deepfakes, manipulated videos and AI-generated misinformation globally. Polarised societies are especially vulnerable during elections and political crises.
The speed and scale of AI-generated misinformation make it particularly difficult for journalists to counter false narratives before they spread widely online.
TBS: What mistakes are news organisations currently making in their rush to integrate AI into newsrooms?
Interestingly, many newsrooms are not rushing into AI at all. If anything, most organisations are moving cautiously and, in some cases, too slowly.
However, newsrooms should already be having serious discussions about ethics, transparency and newsroom standards before AI tools become even more widespread.
TBS: Looking ahead five years, do you think AI will strengthen journalism overall or weaken public trust in media further?
The answer depends largely on how news organisations choose to behave.
Newsrooms genuinely committed to truth, fairness and honesty will continue to uphold those values regardless of technological change.
At the same time, disinformation websites and low-quality content operations could also expand rapidly using AI tools.
Trust is handmade by people. It cannot simply be manufactured by machines.
