2024 is poised to be the largest election year in history: 74 elections and an estimated 4 billion voters globally, half the world’s population, in countries including the USA, UK, India, Russia, South Africa, and Nigeria. With over 25 years in the communications and reputation management industry, I’ve witnessed the crucial role effective communication plays in achieving success. Communication shapes opinions and reputations, and it influences beliefs; for political leaders, that is a pathway to power. 2024 marks a pivotal moment where generative AI communications and democracy converge.
The radical evolution of communications in the 21st century
Throughout human history, communication has been a remarkable fabric of society. Prehistoric humans relied on verbal means—spoken language, storytelling, and gestures—to share information and coordinate group activities. The development of written language around 3200 BCE marked a significant leap forward in recording and sharing information. Johannes Gutenberg’s invention of the printing press in 1440 enabled the mass production of books and pamphlets, facilitating the widespread dissemination of knowledge and transforming access to information.
The invention of the telegraph, some 400 years later, introduced long-distance communication through Morse code, and Alexander Graham Bell’s telephone, patented in 1876, allowed real-time voice communication over long distances. The 20th century witnessed the rise of electronic communications through radio and television. In 1989, Sir Tim Berners-Lee’s creation of the World Wide Web revolutionised global communications.
In the past 20 years, social media and smartphones have dominated our communication landscape. According to Kepios, 61.4% of the world’s population uses social media, with Facebook alone boasting more than 3 billion users. Generative AI models like ChatGPT represent the latest frontier, capable of understanding and generating human-like text and enabling advanced conversational interfaces, content creation, and personalised communication. Upon its launch in November 2022, ChatGPT acquired 1 million users in five days. By December 2023, it had 180 million weekly active users, with 1.5 billion website visits in October 2023 alone. The scale and pace of AI development are almost unimaginable, and its impact could well shape democracy this year.
Alerting all communications professionals and voters
Communication strategists and campaigners are poised to integrate AI like never before. However, a recent report by Golin analysed the impact of AI tools on crisis and issues management. The research reveals that nearly 60% of communications professionals have yet to adapt their reputation management strategies to account for AI, a concern for prominent figures, businesses, and politicians alike.
Jessica Shelver, Managing Director at Digitalis, a firm specialising in digital risk and online reputation, expressed concern, stating, “There’s a significant risk of miscommunication or misinterpretation. Generative AI might produce responses that are inaccurate or not aligned with the intended messaging, potentially leading to misunderstandings, reputational damage, and privacy breaches.”
This holds considerable implications for elections; it could prove as pivotal a moment as Barack Obama’s 2008 presidential campaign, a turning point in leveraging social media for political purposes. James Hann, Managing Director and Head of Risk at Digitalis, whose Government Practice helps governments understand and navigate the digital landscape and manage its risks, said: “2023 has been characterised by the widespread and accessible use of generative AI tools, especially in the online landscapes surrounding conflicts. These tools bring unique challenges, and with some social media platforms actively cutting back on moderation and safety teams, we are looking ahead to the obstacles we expect clients to face next year.”
The threat posed by misinformation and disinformation to our societies is now well-acknowledged, and recent geopolitical events have underscored how AI amplifies it. Hann said, “Mis and disinformation are often used interchangeably, but there’s a subtle difference. Misinformation refers to false or inaccurate information spread unintentionally, whereas disinformation involves deliberately spreading false information to deceive people.” He also expressed concern about the use of deepfakes in the Slovak elections, which illustrated how AI technology can be used to manipulate public opinion and undermine democracy. It is a worrying trend: rapid advances in AI and social media are creating fresh challenges for governments.
Understanding AI’s perils, pitfalls, and potential
The interplay between human influence and AI technologies has reached a critical juncture. Navigating these AI-driven communications requires a prudent approach. The dynamics between humans and AI in shaping reputations and political landscapes call for careful scrutiny and vigilant oversight.
As we navigate this terrain, it’s crucial to acknowledge the immediate risks, concealed threats, and opportunities brought forth by AI in communication and democracy.
Tackling these challenges demands a multifaceted approach involving policy frameworks and media literacy. Strong AI governance, ethical guidelines, and transparent labelling of AI-generated content can mitigate misinformation risks and uphold the integrity of democratic processes.
Despite the risks involved, AI harbours immense potential to enhance human capabilities and drive positive change. Utilising AI for swift and accurate content creation can expand information accessibility. However, in this era of hyperconnectivity, discerning between genuine human-generated content and AI-generated narratives has grown increasingly complex. As we step into this year, we should adhere to three golden rules.
Perform a Digital Audit: Since models like ChatGPT draw on online sources such as Wikipedia, websites, and digital media, consider conducting a digital reputation audit. Companies like Digitalis use proprietary technology to trawl the internet and social media for information about you or your organisation. This enables you to identify potential threats and inaccuracies, which you can then correct or request to have taken down.
Be Cautious with Information: Human psychology heavily influences trust in AI-generated content; trust builds when AI communication aligns with human expectations of authenticity and reliability. Algorithms cater to your preferences, so fact-check, and don’t blindly trust everything generative AI produces.
Explore and Learn: Take the time to comprehend how AI can aid effective communication for you or your organisation. Understanding your audience and crafting a compelling narrative is key to utilising AI effectively.
Knowledge is power, but remember, you control what you see, hear and read. Human brains got us this far. Now, it’s time to use our brains to manage AI tools and our votes wisely.
Heidi Mallace is the Co-Founder of Curayio, a communications consultancy which advises, coaches and trains individuals, teams and businesses for success.