Building Trust in A.I. and Democracy Together
The technology industry and politicians share a common problem: both need public trust. Together, A.I. companies and politicians can build that trust by turning artificial intelligence upon themselves.
In 1995, the Nolan Report set out the Seven Principles of Public Life, which apply to anyone elected or appointed to public office. These principles are Honesty, Openness, Objectivity, Selflessness, Integrity, Accountability and Leadership, and they now underpin the codes of conduct that govern all U.K. public officeholders.
Compare those principles with current A.I. ethical guidance and you will see a remarkable similarity. The Deloitte TrustworthyAI™ principles are Transparency, Responsibility, Accountability, Security, Monitoring for Reliability, and Safeguarding Privacy. Microsoft's cover Accountability, Inclusiveness, Reliability, Fairness, and Transparency. Not every headline word is the same, but the principles designed to ensure ethical behaviour in politicians follow the same pattern as those designed to ensure safe A.I. adoption.
There should be no surprise here. Since the earliest conception of democracy as a political model, principles have existed to ensure that democratic officials are accountable, transparent, and honest in their actions. Checks and balances were first introduced in Greece, where leaders could be ostracised if deemed harmful to the state, and in Rome, where legal avenues existed for citizens to bring grievances against officials who abused their power.
Adopting similar principles to ensure good governance of A.I. is sensible, but there is even more that both sides can learn from each other. Democracy provides significant case studies where checks and balances have failed, and the technology industry should learn from these lessons. Equally, politicians should be open to using A.I. widely to strengthen democracies and build public trust in their words and actions.
Societal trust in both politicians and A.I. is needed.
Transparency and accountability are two core principles for successful democratic government that appear in most ethical A.I. guidance. Delving deeper into both provides lessons and opportunities for the governance of each.
Historically, transparency was not always the norm. In the context of modern governance, however, it is not merely an abstract principle but a tangible asset that drives the efficacy and trustworthiness of a political system. It forms the bedrock of the relationship between the governed and the governing, ensuring that power remains accountable.
Transparency empowers citizens by giving them the tools and information they need to hold their leaders accountable. An informed public can more effectively participate in civic discourse, making democracy more robust and responsive. When citizens can see and understand the actions of their government, they are more likely to trust their leaders and institutions. Transparency, therefore, plays a pivotal role in building societal trust.
Accountability, much like transparency, is a cornerstone of democratic governance. It ensures that those in positions of authority are held responsible for their actions and decisions, serving as a check against potential misuse of power and ensuring that public interests remain at the forefront of governance.
Democracies have institutionalised mechanisms to ensure leaders can be held accountable for their actions: from the Magna Carta in 1215, through John Locke and Montesquieu arguing for the separation of powers and legal accountability, to Lincoln’s description of democracy as the “government of the people, by the people, for the people”, to the impeachment provisions in the U.S. Constitution and votes of no confidence in parliamentary systems.
Holding those in power accountable has been a foundational principle across various civilisations. This concept has evolved, adapting to different cultures and governance systems, but its core remains unchanged: rulers should be answerable to those they govern.
Lincoln’s words are, today, more important than ever.
The collapse of public trust in politicians and public officials has been a global phenomenon over the last decade. High-profile examples include Brazil’s Operation Car Wash unveiling widespread corruption within its state-controlled oil company, the impeachment trials of U.S. President Donald Trump, Malaysia’s 1MDB financial fiasco that implicated its then-Prime Minister Najib Razak, Australia’s “Sports Rorts” affair that questioned the integrity of community sports grant allocations, and the U.K.’s Downing Street party allegations against Prime Minister Boris Johnson during COVID-19 lockdowns.
These events, spread across different continents, underscore the pervasive challenges of maintaining transparency and accountability in democracies.
Public trust has diminished over the same period in which the internet has risen, with the growth of our digital world far surpassing even the expectations of forty years ago. In 1998, few believed an online economy would be significant to the future global economy. By 2021, during global lockdowns, the interconnected digital economy enabled large parts of society to continue working despite restrictions on travel and congregating.
Our digital world has created several challenges that have contributed to the loss of trust:
Proliferation of Sources. The number of information sources has multiplied exponentially. Traditional media, blogs, social media platforms, official websites, and more compete for our attention, often leading to a cacophony of voices. With such a variety of sources, verifying the credibility and authenticity of information becomes paramount.
Paralysis by Analysis. When faced with overwhelming information, individuals may struggle to make decisions or form opinions. This paralysis by analysis can lead to apathy, where citizens may feel that it’s too cumbersome to sift through the data and, as a result, disconnect from civic engagement.
Echo Chambers and Filter Bubbles. The algorithms that power many digital platforms often show users content based on their past behaviours and preferences. This can lead to the creation of echo chambers and filter bubbles, where individuals are only exposed to information that aligns with their pre-existing beliefs, further exacerbating the challenge of discerning truth from a sea of information.
Misinformation and Disinformation. The deliberate spread of false or misleading information compounds the challenge of information overload. In an environment saturated with data, misinformation (false information shared without harmful intent) and disinformation (false information shared with the intent to deceive) can spread rapidly, making it even harder for citizens to discern fact from fiction.
Limited Media Literacy. Many people feel unequipped to critically evaluate sources, discern bias, and understand the broader context. Media literacy acts as a bulwark against the harmful effects of information saturation; where it is absent, bad influences proliferate.
Today, many promise huge benefits from A.I. adoption, yet public trust remains limited. From fears of killer robots to growing concerns about job replacement, there is a pressing need to demonstrate the positive opportunities of A.I. as much as to discuss the fears.
A.I.'s core strength, distilling vast and complex datasets into easily understandable insights tailored to individual users, can mitigate these challenges, increase transparency and accountability, and rebuild trust.
Curating and presenting political information to revolutionise citizens' political interactions
A continuous stream of information about political activity flows across a vast landscape of sources, from official governmental websites to news portals and social media channels. Governments and parliamentary bodies increasingly conduct their operations on digital platforms, swelling the volume of data further.
Trawling these sources, including real-time events such as legislative sessions and public political addresses, and ensuring that every piece of data is captured is beyond human capability, even for dedicated political followers and experts. A.I. can conduct this task efficiently.
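At its simplest, this kind of aggregation means continuously merging items from many feeds, discarding duplicates that appear in more than one source, and ordering the result for the reader. The sketch below illustrates that core loop with the Python standard library only; the source names, URLs, and items are invented for illustration.

```python
from datetime import datetime

def aggregate_sources(*sources):
    """Merge items from several feeds, dropping duplicates by URL
    and returning the combined list newest-first."""
    seen, merged = set(), []
    for source in sources:
        for item in source:
            if item["url"] not in seen:       # skip stories already collected
                seen.add(item["url"])
                merged.append(item)
    return sorted(merged, key=lambda i: i["published"], reverse=True)

# Two hypothetical feeds that partially overlap.
parliament = [
    {"url": "https://example.gov/bill-42", "title": "Bill 42 second reading",
     "published": datetime(2024, 3, 2)},
]
news = [
    {"url": "https://example.gov/bill-42", "title": "Bill 42 second reading",
     "published": datetime(2024, 3, 2)},
    {"url": "https://example.news/analysis", "title": "Analysis of Bill 42",
     "published": datetime(2024, 3, 3)},
]

feed = aggregate_sources(parliament, news)
# Two unique items survive; the newer analysis piece comes first.
```

A production system would of course fetch live feeds, handle malformed items, and use far more robust duplicate detection than URL equality, but the shape of the task is the same.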
A.I. can be seamlessly integrated into these platforms to track activities such as voting patterns, bill proposals, and committee discussions. By doing so, A.I. can offer a live stream of political proceedings directly to the public. During parliamentary sessions or public addresses, AI-powered speech recognition systems can transcribe and analyse what’s being said in real time. This allows for the immediate dissemination of critical points, decisions, and stances, making political discourse more accessible to the masses.
With real-time activity tracking, A.I. can foster an environment of transparency and immediacy. Citizens can feel more connected to the democratic process, trust in their representatives can be enhanced, and the overall quality of democratic engagement can be elevated.
Natural language processing (NLP), a subset of A.I., can be employed to interpret the language used in political discourse. By analysing speeches, official documents, and other textual data, NLP can determine the sentiment, intent, and critical themes of the content, providing a deeper understanding of its context and implications. Politicians and political bodies often communicate with the public through social media channels. A.I. can monitor these channels for official statements, policy announcements, or public interactions, ensuring that citizens are immediately aware of their representatives’ communications.
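To make the idea of sentiment analysis concrete, here is a deliberately toy sketch: it scores a statement by counting words from small hand-picked positive and negative lists. Real NLP systems use trained language models rather than word lists, and the lexicon and example sentences here are invented.

```python
# Toy lexicon; real systems learn these associations from data.
POSITIVE = {"support", "improve", "protect", "growth", "benefit"}
NEGATIVE = {"oppose", "cut", "harm", "decline", "risk"}

def sentiment(text):
    """Return a score in [-1, 1]: positive minus negative word counts,
    normalised by the total number of sentiment-bearing words found."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment("We support this bill to improve and protect local growth"))
# → 1.0 (all sentiment-bearing words are positive)
print(sentiment("This policy will harm families and cut vital services"))
# → -1.0 (all sentiment-bearing words are negative)
```

Even a sketch this crude shows the pipeline's shape: tokenise the text, score each token, and reduce the scores to a single interpretable number per statement.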
A.I.-driven data visualisation tools can transform complex data into interactive charts, graphs, and infographics. This allows users to quickly grasp the essence of the information, understand trends, and make comparisons.
A.I. can power interactive platforms where citizens can receive real-time updates and engage directly by asking questions, voicing concerns, or even participating in polls. This real-time two-way interaction can significantly enhance civic engagement.
Recognising that not all information is relevant to every individual, A.I. can tailor summaries based on user preferences and past interactions. For example, a user interested in environmental policies would receive detailed summaries, while other areas might be condensed.
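A minimal sketch of that kind of tailoring follows: topics the user has registered an interest in get the full summary, while everything else is condensed to a short brief. The topic names, summary texts, and truncation rule are all invented for illustration.

```python
def tailor(summaries, interests, brief_len=60):
    """Return full summaries for topics the user follows and a
    truncated one-line brief for everything else."""
    tailored = {}
    for topic, text in summaries.items():
        if topic in interests:
            tailored[topic] = text                      # full detail
        else:
            tailored[topic] = text[:brief_len].rstrip() + "..."  # brief
    return tailored

summaries = {
    "environment": "The committee approved stricter emissions targets "
                   "for 2030 and funding for coastal flood defences.",
    "transport": "A new rail franchise framework was announced alongside "
                 "a consultation on rural bus subsidies.",
}

# A user who follows environmental policy sees it in full;
# transport is condensed to a brief.
view = tailor(summaries, interests={"environment"})
```

In practice the interest profile would be learned from past interactions rather than declared explicitly, and condensing would use summarisation models rather than truncation, but the personalisation logic is the same.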
Importantly, access to this information and insight should be freely available, so that everyone can become more engaged in, and more trusting of, democratic governance and politics. While technology companies will be essential to building a trustworthy system, and politicians will benefit from increased trust in their words and actions, that will only happen if barriers to access are removed.
Rights and Responsibilities – Demonstrating that A.I. and Politicians Can Be Trusted
Of course, these approaches raise concerns as well as benefits. They can improve public confidence while demonstrating the value of safe and trustworthy A.I. adoption in politics, but they require explicit controls and governance to address the risks.
There may be concerns about trusting A.I. with such an important task, and a cynical perspective may be that some see benefits in avoiding public scrutiny. Yet, as both A.I. and democratic institutions follow similar ethical principles, there is far more in common between the two systems. These similarities can create a firm basis for mutual benefit that most politicians, technologists, and citizens would support.
It’s crucial to address potential privacy concerns. These political A.I. systems must ensure that personal data is protected and that users can control the information they share. Transparent data practices and robust security measures are imperative to gain users’ trust. At the same time, democracies should not allow privacy to be used to avoid public transparency or accountability.
Objective reporting is paramount for maintaining trust in democratic processes. Given its computational nature, Artificial Intelligence promises to offer impartiality in reporting, but this comes with its own challenges and considerations. Again, those held to account should not seek to introduce bias into the situation, and ethical adoption of A.I. is essential to deliver true objectivity.
Even after deployment, A.I. systems should be monitored continuously to ensure neutrality. Feedback mechanisms, where users can report perceived biases or inaccuracies, can help refine the A.I. and ensure its continued impartiality. As we delegate the task of impartial reporting to A.I., it’s vital to have ethical guidelines in place. These guidelines should address issues like data privacy, the transparency of algorithms, and the rectification of identified biases.
Five immediate opportunities can be implemented today. Each would increase mutual transparency and accountability while raising public awareness of A.I.'s benefits and positive applications.
AI-Powered Insights and Summaries to counter the proliferation of data and misinformation.
Automated data collection across media to ensure fair coverage and balance.
Natural Language Processing of public content to avoid echo chambers and filter bubbles.
Automated data visualisation to inform analysis and understanding.
Predictive analysis with user feedback to reduce misinformation and disinformation.
All these tools are available today. All these measures will demonstrate and grow trust in the adoption of A.I. All bring to life the responsible adoption of A.I. for everyone. They will unite the technology industry and politicians around a shared objective. Most importantly, they will begin to restore trust in our democratic governments that have been fundamental to our prosperity, growth, and security.