Coping with AI anxiety?
A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.
Nearly every UK broadsheet newspaper covers AI and its impact on society this weekend.
"AI isn't falling into the wrong hands. It's being built by them" - The Independent.
"A race it might be impossible to stop: How worried should we be about AI?" - The Guardian.
"Threat from Artificial Intelligence more urgent than climate change" - The Telegraph.
"Time is running out: six ways to contain AI" - The Times.
Real humans wrote all these pieces, although AI may have helped author them. Undoubtedly, the process of creating the copy and getting it onto screens and into readers' hands will have used AI somewhere.
Like articles about AI, AI Moments are almost impossible to avoid.
Most of these articles are gloomy predictions of the future, prompted by Geoffrey Hinton's resignation from Google over concerns about the race between AI tech firms proceeding without regulation or public debate.
Indeed, these journalists argue that if the people building AI have concerns, and quite often cannot fully explain how their own systems work, then everyone else should be worried as well.
A few point to the recent open letter calling for a six-month pause on AI research. The authors of this open letter believe that governments and society can agree within six months on how to proceed safely with AI development. The evidence from the last decade does not support that belief.
These are not new concerns for many of us, or for those who read my occasional posts here.
None of the articles references the similar 2015 letter, "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", led by the Future of Life Institute. It gained far more comprehensive support, was signed by many of the same signatories as this year's letter, and made a similar set of requests, only eight years earlier.
Or the 2017 "Autonomous Weapons Open Letter", again via the Future of Life Institute, signed by over 34,000 experts and technologists.
Technologists have been asking for guidance, conversation, engagement, and even regulation, for over ten years in the field of AI.
We have also worried, publicly and privately, that the situation mirrors 2007, with technologists set to replace bankers as the cause of all our troubles.
Although in this case, most technologists have warned that a crash is coming.
In 2015, I ran a series of technology conversations with the military for the then-CGS around planning for the future. These talks, presentations, and fireside chats were intended to prompt preparation for AI by 2025, especially within the command and control systems due to enter service in 2018.
A key aspect was building the platform to exploit AI and planning for how it would change military operations.
Yet the response was negative.
"Scientific research clearly shows that AI will not be functional in any meaningful way before 2025" was one particularly memorable response from a lead scientist.
Others pointed to the lack of funding for AI in the defence capability plans as a clear indicator that they did not need to worry about it.
They were not alone in ignoring automation. Our militaries, politicians, and broader society have been worried by more significant concerns and issues than ones created by computer programs, bits of software, and code that dreams of electronic cats.
One significant advantage of this new age of AI anxiety is that people are now willing and eager to talk and learn about AI. We have finally got an active conversation with the people who will be most affected by AI.
So how do we use this opportunity wisely?
People are scared of things they do not understand. Everyone should grow their understanding of AI: how it works, what it can do, and what it shouldn't do.
Here are a few suggestions to help prepare: light tips to prompt debate and provoke challenge, aimed at people who read the headlines and want to know more, rather than at experts and AI developers.
First, I suggest three books to understand where we are today, where we are heading, and what we should be worried about.
Books
Life 3.0: Being Human in the Age of Artificial Intelligence - Max Tegmark, 2017. The author is the President of the Future of Life Institute and behind many of the open letters referenced above. Not surprisingly, Max takes a bold view of humanity's future. Some of the ideas proposed are radical, such as viewing life as a waveform transferable from carbon (humans) to silicon (machines). However, Life 3.0 is the ideal start for understanding the many challenges and tremendous opportunities presented in an age with AI.
AI Superpowers: China, Silicon Valley, and the New World Order - Kai-Fu Lee, 2018. A renowned expert in AI examines the global competition between the United States and China in AI development. Lee discusses the impact of AI on society, jobs, and the global economy with insights into navigating the AI era.
21 Lessons for the 21st Century - Yuval Noah Harari, 2018. A broader view of the challenges and trends facing us all this century. Harari is a master storyteller; even if you disagree with his perspective, you cannot fault his provocations. For instance, he asks whether AI should protect human lives or human jobs: letting humans drive vehicles is statistically bad for humans, with drivers impaired by alcohol or drugs causing around 30% of road deaths and distracted drivers a further 20%.
Film
Three broad films prompt consideration of AI in society. I wondered whether films would be appropriate suggestions, but each takes an aspect of AI and considers how humans interact with it:
Ex Machina - Dir. Alex Garland, 2014. Deliberately thought-provoking thriller that explores AI, consciousness, and ethical implications of creating sentient beings. The film shows the default Hollywood image of "AI and robots" as attractive, super intelligent, wise androids. If you have seen it before, consider the view that all the main characters are artificial creations rather than humans.
Her - Dir. Spike Jonze, 2013. A poignant film about humanity, love, relationships, and human connection in a world with AI. The AI in "Her" is more realistic, where a functional AI adapts to each individual to create a unique interaction yet remains a generic algorithm.
Lo and Behold: Reveries of the Connected World - Dir. Werner Herzog, 2016. A documentary that explores the evolution of the internet, the essential precursor to an age with AI, and how marvels and darkness now fill our connected world. Herzog, in his unique style, also explores whether AI could create a documentary as well as he can.
Websites
Three websites that will help you explore AI concepts, tools, and approaches:
Partnership on AI (https://www.partnershiponai.org/) The Partnership on AI is a collaboration among leading technology companies, research institutions, and civil society organizations to address AI's global challenges and opportunities. Their website features a wealth of resources, including research, reports, and news on AI's impact on society, ethics, safety, and policy.
AI Ethics Lab (https://aiethicslab.com/) The AI Ethics Lab is an organization dedicated to integrating ethical considerations into AI research and development. Their website offers various resources, including articles, case studies, and workshops that help researchers, practitioners, and organizations to understand and apply ethical principles in AI projects.
The Alan Turing Institute (https://www.turing.ac.uk/) The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The website features many resources, including research papers, articles, and news on AI, data science, and their ethical implications.
Experiments
Hands-on experiments with AI and the basic building blocks behind it. These require a little coding awareness, but the concepts are usually well explained and clearly demonstrated:
Google AI (https://ai.google/) is the company's research hub for artificial intelligence and machine learning. The website features a wealth of information on Google's AI research, projects, and tools. While the focus is primarily on the technical aspects of AI, you can also find resources on AI ethics, fairness, and responsible AI development.
OpenAI (https://www.openai.com/) OpenAI is a leading research organization focused on developing safe and beneficial AI. Their website offers various resources, including research papers, blog posts, and news on AI developments.
TensorFlow (https://www.tensorflow.org/) TensorFlow is an open-source machine learning library developed by Google Brain. The website offers comprehensive documentation, tutorials, and guides for beginners and experienced developers.
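Before reaching for these libraries, it can help to see the core loop they automate. The sketch below trains a single artificial neuron with gradient descent in plain Python; the data, learning rate, and epoch count are illustrative choices of mine, not anything prescribed by TensorFlow:

```python
# A single "neuron" learning y = 2x by gradient descent:
# the basic training loop that libraries like TensorFlow run at scale.

def train(samples, lr=0.05, epochs=200):
    w = 0.0  # start with an uninformed weight
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            error = pred - y       # how wrong the current guess is
            w -= lr * error * x    # nudge the weight to reduce the error
    return w

data = [(1, 2), (2, 4), (3, 6)]    # points on the line y = 2x
w = train(data)
print(round(w, 2))                 # learned weight, approximately 2.0
```

The tutorials on the sites above wrap exactly this idea (a prediction, an error, an update) in far more capable machinery.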
These are just introductions and ideas; they are not anything like an entire course of education, nor meant to do more than get a conversation started.
It also struck me, making these lists, that many of the texts and media are over five years old. That is likely indicative: media needs time to prove its relevance, and more recent items, especially those predicting the future, have yet to prove their worth.
I'd be fascinated to read your suggestions for media that help everyone become more comfortable in the Age With AI.
Being relevant in an increasingly competitive international environment
The Integrated Review of Security, Defence, Development and Foreign Policy changes everything. Now the Armed Forces must change and achieve the whole mission to seize the opportunities ahead.
Today, the Defence Command Paper and Defence Industrial Strategies are published. In response, Defence needs to do more than just purchase exquisite equipment. Now is the time to change mindset and culture across every part of the organisation to make us relevant for the future.
Deloitte proposes three shifts to amplify the changes required:
Learning, not failing
Collaborate to compete
Curate cultural transformation
Last week's Integrated Review described a clear future vision within an increasingly competitive international environment. Global Britain will need to sustain advantages through science and technology, shape the international order, strengthen security, and build resilience.
For the first time in decades, UK military forces have a sharp purpose in a broader strategy.
Our purpose is more than defeating our enemies in total war, although, as many retired officers will state this week, winning battles is still a core component of any global power. It is no longer the only component and, possibly, it is the least relevant within the Integrated Review's vision.
"Conflict and Instability" was one sub-section of the Integrated Review, occupying just one of its 112 pages. Yet we will spend more time this week considering our ability and capabilities to complete that page than any other part of the review.
Any officer, serving or retired, will state that completing the mission is the key to military success. Mission Command shapes our Armed Forces, and the whole body adopts a unifying purpose to deliver missions. We need to amplify that unifying mindset to change our people and all enabling organisations within the MOD.
Our Armed Forces need to consider, describe, and deliver how they will be relevant across every part of the review, from championing technology power and defending human rights as a force for good, to deterring and disrupting threats and supporting UK national resilience. CGS has published his views on how the Army will deliver against these goals: Future Soldier | The British Army (mod.uk)
Three immediate shifts for relevance
Deloitte proposes three immediate shifts to make our Armed Forces more relevant across the Integrated Review vision.
Learning, not failing. Organisations that learn-fast and adapt quickly perform better than those that fail-fast or fear failure. Deloitte encourages teams to build their deliveries around a desire to learn and demonstrate progress quickly. We adopt a shift in the scale of thinking. Rather than waiting until the entire programme fails before attempting to learn, we encourage rapid learning cycles with quick response times. Troops employ this mindset on operations, and we want to deliver its wider adoption across Defence.
Collaborate to Compete. Global Britain needs a united presence to export technologies in an increasingly competitive international environment and shape the open international order. The UK success in developing and adopting COVID-19 vaccines, which Deloitte supported, shows a path to follow. UK Government demonstrated how it could create markets, enable alternatives, and rapidly exploit success within existing regulations.
MOD competitions often reduce total contribution, with single tenders awarded after prolonged and gruelling selections, leaving the MOD entirely reliant upon the sole survivor. We need an approach that increases our international competitiveness and exploits innovation as new technologies from different suppliers appear downstream. Deloitte sees technology shifts happening in 18-month cycles, and we need to adopt a similar pace within our procurement and equipment processes.
Curate cultural transformation. Leaders curate cultures and pass that culture on to the next generation. Today's leaders must use the Integrated Review to consider the culture and values that will make Defence relevant in the future for its people, its suppliers, and the international environment. Deloitte has helped organisations preserve cultural strengths that create uniqueness and competitive advantage and adopt new mindsets to become relevant. We also assist in embracing that culture across the entire organisation. The vision from the Integrated Review is bold, challenging, and disruptive. Defence needs to prepare, support, and encourage its people to embrace rather than fear that future.
Together, these three shifts increase our ability to adapt, adopt the capabilities to compete, and be relevant in a competitive future.
We support Global Britain's bold vision
We support Global Britain's bold vision and the exciting future shown in the Integrated Review, and we will assist the MOD across its enterprise as it adjusts and changes, as we have done throughout the 175 years of our existence. Like the Prime Minister and CGS, we are incredibly optimistic about the UK's place in the world and our ability to seize the opportunities ahead.
Ethical Principles for Artificial Intelligence from Microsoft
To realise the full benefits of AI, we’ll need to work together to find answers to the questions it raises and create systems that people trust. Ultimately, for AI to be trustworthy, we believe that it must be “human-centred” – designed in a way that augments human ingenuity and capabilities – and that its development and deployment must be guided by ethical principles that are deeply rooted in timeless values.
At Microsoft, we believe that six principles should provide the foundation for the development and deployment of AI-powered solutions that will put humans at the centre:
Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems.
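One concrete way to start understanding bias (a simple illustration of mine, not Microsoft's methodology) is to compare how often a system produces a favourable decision for different groups, a basic demographic-parity check. The data and groups below are invented:

```python
# Compare a model's favourable-decision rate across two groups:
# a simple demographic-parity check on illustrative, made-up outcomes.

def selection_rate(outcomes):
    """Fraction of decisions that were favourable (1 = yes, 0 = no)."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 favourable = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 favourable = 0.375

gap = abs(selection_rate(group_a) - selection_rate(group_b))
print(f"selection-rate gap: {gap:.3f}")  # a large gap warrants investigation
```

A gap alone does not prove unfairness, but measuring it is the first step toward the understanding the principle calls for.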
Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.
Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from theft.
Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people.
Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.
Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other areas, such as healthcare and privacy, and be observed both during system design and in an ongoing manner as systems operate in the world.