While social media can fuel populism and political polarization, chat AI has the potential to guide people away from extreme opinions and towards more moderate stances.

As populism and political polarization rise in various countries, changes in the information environment brought about by social media are often cited as a cause. John Burn-Murdoch, columnist and chief data reporter for the Financial Times, argues that chat AI, in contrast to social media, is likely to steer people away from extreme opinions and towards more moderate positions.
Social media is populist and polarizing; AI may be the opposite
https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56a0e
Burn-Murdoch points out that over the past 15 years, populism and polarization have grown while trust in expertise and established institutions has declined. Various factors contribute to these trends, but one of the most important is the change in the information environment brought about by the rise of social media. Social media gives people outside the elite and expert classes a strong voice and makes a wide range of opinions visible, but it also amplifies the voices of extremists and contrarians.
One of the newer changes of recent years is the rise of chat AI. Many people now use chat AI daily, asking questions about things they are curious about or having the AI check the validity of others' claims. Chat AI therefore has the potential to influence people's political perceptions and stances.
Before turning to chat AI, Burn-Murdoch examines the difference between social media companies and AI developers. Social media companies profit primarily by capturing people's attention, and in the process they tend to disregard truth and favor sensational or inflammatory content. In the face of such criticism, social media companies have maintained the pretense that they are merely neutral platforms where people share information.
By contrast, as British philosopher Dan Williams argues, AI developers primarily serve individuals and businesses seeking accurate and useful information, so they risk liability if their chat AI produces false or harmful content. As a result, AI developers have an incentive to prioritize objective facts, and chat AI tends to 'converge' towards a more moderate position closer to the truth.
To test this theory, Burn-Murdoch analyzed the political leanings of social media posts and of conversations with chat AI. In the graph in the X post below, the left side represents left-wing leanings and the right side right-wing leanings. Gray shows the political leanings of the general public, pink those of content posted on social media, and blue those of conversations with chat AI. Social media posts are more polarized than the general public, while conversations with chat AI are more centrist overall, with reduced polarization.
While social media is polarizing, evidence suggests AI may nudge people towards the centre.
— Stefan Schubert (@StefanFSchubert) March 28, 2026
This holds true of all studied models. Grok is more right-leaning than other models, but also has depolarizing effects.
By @jburnmurdoch.
In the graph below, the left side represents left-wing leanings and the right side right-wing leanings. Gray shows the political leanings of the general public, and blue shows those of conversations with ChatGPT, Gemini, DeepSeek, and Grok, respectively. While Grok leans slightly more right than the other chat AIs, polarization is reduced overall and conversations skew more centrist.
— Stefan Schubert (@StefanFSchubert) March 28, 2026
Burn-Murdoch also found that even when a chat AI knew a user's political leanings, it still nudged people away from hardline opinions, and that chat AI rarely endorsed conspiracy theories such as election fraud or a link between vaccines and autism.
Burn-Murdoch stated, 'This is just one analysis, and usage patterns and the AI models themselves may evolve in directions different from what I have found. But we can at least be optimistic that the next information revolution (chat AI) may take us in a direction less damaging to society than the last one (social media).'
in AI, Posted by log1h_ik