Coping with AI anxiety?

A handful of suggestions to make adapting to the Age with AI slightly less disconcerting.

Please share your ideas in the comments.

Nearly every UK broadsheet newspaper covers AI and its impact on society this weekend.

"AI isn't falling into the wrong hands. It's being built by them" - The Independent.

"A race it might be impossible to stop: How worried should we be about AI?" - The Guardian.

"Threat from Artificial Intelligence more urgent than climate change" - The Telegraph.

"Time is running out: six ways to contain AI" - The Times.

Real humans wrote all these pieces, although AI may have helped author them. Undoubtedly, the process of creating their copy and getting it onto the screens and into the hands of their readers used AI somewhere.

Like articles about AI, AI moments are almost impossible to avoid.

Most of these articles are gloomy predictions of the future, prompted by Geoffrey Hinton's resignation from Google over his concerns about the race between AI tech firms, run without regulation or public debate.

Indeed, these journalists argue that if the people building AI have concerns, and quite often cannot explain how their own systems work, then everyone else should be worried as well.

A few point to the recent open letter calling for a six-month research pause on AI. Its authors believe that governments and society can agree within six months on how to proceed safely with AI development. The evidence of the last decade does not support that belief.

These are not new concerns for many of us, or for those who read my occasional posts here.

None of the articles references the similar 2015 letter, led by The Future of Life Institute, that gained far more comprehensive support: "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", signed by many of the same signatories as this year's letter, with a similar set of requests, only eight years earlier.

Or the 2017 "Autonomous Weapons Open Letter", signed by over 34,000 experts and technologists.

Technologists have been asking for guidance, conversation, engagement, and even regulation in the field of AI for over ten years.

We have also worried, publicly and privately, that technologists will replace the bankers of 2007 as the cause of all our troubles.

Although in this case, most technologists have warned that a crash is coming.

In 2015, I ran a series of technology conversations with the military for the then-CGS around planning for the future. These talks, presentations, and fireside chats were intended to prompt preparation for AI by 2025, especially for the command and control systems due to enter service in 2018.

A key aspect was building the platform to exploit AI and planning how it would change military operations.

Yet the response was negative. 

"Scientific research clearly shows that AI will not be functional in any meaningful way before 2025" was one particularly memorable response from a lead scientist.

Others pointed to the lack of funding for AI in the defence capability plans as a clear indicator that they did not need to worry about it.

They were not alone in ignoring automation. Our militaries, politicians, and broader society have been preoccupied with concerns they judged more significant than ones created by computer programs, bits of software, and code that dreams of electronic cats.

One significant advantage of this new age of AI anxiety is that people are now willing and eager to talk and learn about AI. We have finally got an active conversation with the people who will be most affected by AI.

So how do we use this opportunity wisely?

People are scared of something they do not understand. Everyone should grow their understanding of AI: how it works, what it can do, and what it should not do.

Here are a few suggestions to help you prepare: light tips to prompt debate and provoke challenge, aimed at people who read the headlines and want to know more, rather than at experts and AI developers.

First, I suggest three books to understand where we are today, where we are heading, and where we should be worried.

Books

Life 3.0: Being Human in the Age of Artificial Intelligence - Max Tegmark, 2017. The author is the President of the Future of Life Institute and is behind many of the open letters referenced above. Not surprisingly, Max takes a bold view of humanity's future. Some of the ideas proposed are radical, such as viewing life as a waveform transferable from carbon (humans) to silicon (machines). However, Life 3.0 is the ideal start for understanding the many challenges and tremendous opportunities presented in an age with AI.

AI Superpowers: China, Silicon Valley, and the New World Order - Kai-Fu Lee, 2018. A renowned expert in AI examines the global competition between the United States and China in AI development. Lee discusses the impact of AI on society, jobs, and the global economy with insights into navigating the AI era.

21 Lessons for the 21st Century - Yuval Noah Harari, 2018. A broader view of the challenges and trends facing us all this century. Harari is a master storyteller; even if you disagree with his perspective, you cannot fault his provocations. For instance, he asks whether AI should protect human lives or human jobs: letting humans drive vehicles is statistically worse for humans, given that drivers impaired by alcohol or drugs cause 30% of road deaths and distracted drivers a further 20%.

Film

Three broad films prompt consideration of AI in society. I wondered if films would be appropriate suggestions, but each takes an aspect of AI and considers how humans interact with it:

Ex Machina - Dir. Alex Garland, 2014. A deliberately thought-provoking thriller that explores AI, consciousness, and the ethical implications of creating sentient beings. The film shows the default Hollywood image of "AI and robots": attractive, superintelligent, wise androids. If you have seen it before, consider the view that all the main characters are artificial creations rather than humans.

Her - Dir. Spike Jonze, 2013. A poignant film about humanity, love, relationships, and human connection in a world with AI. The AI in "Her" is more realistic, where a functional AI adapts to each individual to create a unique interaction yet remains a generic algorithm. 

Lo and Behold: Reveries of the Connected World - Dir. Werner Herzog, 2016. A documentary that explores the evolution of the internet, the essential precursor to an age with AI, and how marvels and darkness now fill our connected world. Herzog, in his unique style, also explores whether AI could create a documentary as well as he can.

Websites

Three websites that will help you explore AI concepts, tools, and approaches:

Partnership on AI (https://www.partnershiponai.org/) The Partnership on AI is a collaboration among leading technology companies, research institutions, and civil society organizations to address AI's global challenges and opportunities. Their website features a wealth of resources, including research, reports, and news on AI's impact on society, ethics, safety, and policy. 

AI Ethics Lab (https://aiethicslab.com/) The AI Ethics Lab is an organization dedicated to integrating ethical considerations into AI research and development. Their website offers various resources, including articles, case studies, and workshops that help researchers, practitioners, and organizations to understand and apply ethical principles in AI projects. 

The Alan Turing Institute (https://www.turing.ac.uk/) The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The website features many resources, including research papers, articles, and news on AI, data science, and their ethical implications.

Experiments

Hands-on experiments with AI and the basic building blocks behind it. These require a little coding awareness, but the steps are usually well explained and clearly demonstrated:

Google AI (https://ai.google/) is the company's research hub for artificial intelligence and machine learning. The website features a wealth of information on Google's AI research, projects, and tools. While the focus is primarily on the technical aspects of AI, you can also find resources on AI ethics, fairness, and responsible AI development. 

OpenAI (https://www.openai.com/) OpenAI is a leading research organization focused on developing safe and beneficial AI. Their website offers various resources, including research papers, blog posts, and news on AI developments. 

TensorFlow (https://www.tensorflow.org/) TensorFlow is an open-source machine learning library developed by Google Brain. The website offers comprehensive documentation, tutorials, and guides for beginners and experienced developers. 
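The tutorials on these sites start with exactly this kind of exercise. As a flavour of the building blocks involved, here is a minimal sketch in plain Python (no libraries; the toy data and learning rate are illustrative choices, not from any particular tutorial) of a single artificial neuron learning the rule y = 2x + 1 by gradient descent. Frameworks like TensorFlow scale this same idea up to millions of parameters.

```python
# A single artificial neuron learning y = 2x + 1 by gradient descent.
# Plain Python, no libraries needed.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # toy training set (assumption)

w, b = 0.0, 0.0  # the neuron starts with no knowledge
lr = 0.01        # learning rate: how big each corrective nudge is

for epoch in range(1000):
    for x, target in data:
        pred = w * x + b        # forward pass: the neuron's current guess
        error = pred - target   # how wrong was the guess?
        w -= lr * error * x     # nudge the weight against the error
        b -= lr * error         # nudge the bias too

print(round(w, 2), round(b, 2))  # converges towards w = 2, b = 1
```

The entire "learning" here is two subtraction lines; modern AI repeats that correction loop across vastly larger networks and datasets.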

These are just introductions and ideas, not a full course of education; they are meant to do no more than get a conversation started.

It also struck me, while making these lists, that many of the texts and media are over five years old. That is likely indicative that media needs time to become relevant, and that more recent items, especially those predicting futures, need time to prove their worth.

I'd be fascinated to read your suggestions for media that help everyone become more comfortable in the Age With AI.

Coping with AI Anxiety, created using Midjourney

Read More
Leadership, Military, Fast Intelligence, AI, Disruption - Tony Reeves

Leaders need AI but does AI need Leaders?


An empty factory replaced by automation, produced by Midjourney

A lot of the discussion about AI focuses on how it affects us as individuals in the present. We tend to use a personal and immediate perspective, and often apply a simple criterion: if AI is not impacting me now, then it is nothing to worry about.

We also tend to hold a uniform view of war: always the same and universal, a bloody, violent mess resolved by brave people fighting in the mud. We assume that technology can only make war faster, more brutal, and more violent.

Historians argue that warfare was shaped by how we learned to communicate 100,000 years ago and that writing 6,000 years ago made it worse. We have heard military leaders claim that the same values that helped them advance in their careers are the ones we will need in the future.

We seem determined to convince ourselves that warfare will not change.

Yet society is changing, rapidly, and there are three factors that leaders should consider when thinking about whether AI needs leaders:

We need AI to compete and win

We cannot process the volume of data, generate valuable insights, or operate at the speed we need to succeed without AI support for our military services. This is borne out by similar experiences in other sectors. However, current approaches plan to add a bit of AI here and there, without much careful thought or thorough evaluation. I have written elsewhere about this piecemeal approach to AI. Industry does not always help, tempting users with amazing new tools to buy.

We need AI to win, and we need to apply that AI across large areas of military operations.

AI is transforming every sector and industry, producing more horizontal and streamlined organisational structures. AI enables more distributed and collaborative decision-making, faster and easier sharing, and raises the potential of individuals and teams. With AI's help, teams can work more efficiently and quickly, and do not require the same amount of managerial oversight and feedback.

Just as robots replaced workers, supervisors, inspectors, managers, and other middle-level roles in manufacturing lines, AI will do the same for organisations that rely on information, data, and insights.

The middle managers are the most vulnerable to AI disruption.

Moreover, future leaders will have very different career paths from our current leaders.

Military leaders tend to follow in the footsteps of their predecessors. They are advised to learn from this staff role, grow in this command position, and operate in this context.

One day, they will be promoted, as long as they stick to the path.

However, we see that automation is changing, replacing, reshaping, and limiting those paths. We cannot expect future leaders to cope with very different structures, challenges, and ways of solving complexities without acknowledging these crucial changes. We need to rethink that path and make the adjustments today.

Unless, that is, we are in an organisation that is arrogant, slow, resistant to change, reliant on technology to do the same things faster rather than differently, dependent on hierarchical command and control, and planning for change over years rather than months.

And what of the leaders who stay in those organisations: where will they come from, and how will they develop? Successful organisations must design, select, train, and change to offer future leaders valuable opportunities that challenge and enhance their leadership skills, value human creativity, and reward their efforts.

Since 2015, a series of predictions has charted how AI would fundamentally change the world. I have compiled the key steps separately, as not everything can be summarised in a few bullets. Our world continues to progress through those predictions, for good or bad, the main ones being:

  • Demonstrate that AI can achieve better-than-human performance at skills like translation and image recognition

  • Build the infrastructure that would enable global scale and capacity with cloud storage and compute

  • Construct component building blocks that would enable the adoption of AI across sectors and industries

  • Democratise AI by making it easy to implement and access through packaged capabilities to generate content and output

  • Interface with AI using conversational and intuitive models that empower anyone with access

  • Replace mundane repeatable activities and tasks to enhance human ingenuity

  • Agree that AI is changing society and that it needs international collaboration on its introduction and control

  • Test AI to show that it acts in unpredictable and unintended ways

  • Create common principles and approaches to develop safe and trustworthy AI

  • Transition human activity in key sectors with AI alternatives that reduce costs and increase AI development

  • Adopt AI in high-risk areas like security, justice, and defence to improve performance and reduce (own side) military casualties

  • Use AI to develop and scale future AI performance and adoption within and across roles, functions, and sectors

  • Support humanity as they transition from work that involves mundane, repeatable activities into more creative, insightful activities

  • Increase digital skills to anticipate and adapt to working alongside AI toolsets

  • Develop international and national planning, funding and support for people who are no longer employed or employable

  • Anticipate highly automatable sectors to help those affected transition to employment elsewhere

  • Recognise that the right mindset about AI is more important for a safe long-term transition than understanding the technical toolset alone

  • Plan for a society that enhances human ingenuity with AI that empowers human life with value and worth

In all these cases, we have taken the easy path: adopting the parts that reduce costs or deliver immediate gains, ignoring the harder elements like international agreement, and failing to consider the consequences in a meaningful, planned, and funded way.

We've stripped out the easy, taken the quick gains, and left future generations to pick up the bill.

This list shows how we are transforming society with the revolution of AI. This revolution also demands a radical change in the aspects of warfare and the military. The change should originate from the militaries themselves, who can harness the advantages of AI, but it will likely come from external sources, such as their new recruits or their enemies.

As an evangelist, I believe that military leaders today have a duty to prepare their command and their successors for an automated future. That duty cannot be met by accepting the common view that AI will not alter the nature of warfare, or that warfare is always the same.

It requires a deeper reflection on how your command could be affected and acting on those opportunities.

Sometimes, spreading this warning feels like Niels Bohr publishing his quantum theory of the atom: it is difficult for society to imagine the inevitable outcomes. Yet the world had thirty years before Oppenheimer applied those theories and began talking of destroying worlds with atomic bombs.

Today, unlike the atomic age, the research time for disruptive AI from theory to deployment is measured in months, not decades.

When we ask whether AI needs leaders, we reach a key conclusion: in a world where automation has taken over simple and routine tasks, we still need leadership to tackle the most complex challenges. But how and where can our future leaders develop the skills to meet those demands?

Leadership is about preparing your teams for the future. And that is also where AI needs leadership.

Read More
Leadership, Strategy - Tony Reeves

What does Global Britain Stand and Fall for?

Our approach to national security must explain more: “what does the UK stand for?”

Today's Integrated Review rightly prioritises the changes required across Defence. It emphasises the importance of technology, especially around data and information, highlights the urgent need to transform, and explains how the future battlespace will differ from today.

It also details how "China's growing international stature is by far the most significant geopolitical factor in the world today. The fact that China is an authoritarian state with different values presents challenges for the UK and our allies. China will contribute more to global growth than any other country in the next decade with benefits to the global economy."

The Review highlights Russia's obvious threat, based on its previous aggression in Ukraine, as an "acute threat to our security. Until relations with its government improve, we will actively deter and defend against the full spectrum of threats emanating from Russia."

The Review also emphasises the need for a digital backbone to gain an information advantage in multi-domain operations over our adversaries. What this advantage looks like or how we will achieve it will become clearer shortly.

According to the Review, the future is obvious:

  • more technology,

  • an immediate threat from Russia, and

  • a global shift towards China.

A problem-solving and burden-sharing nation with a global perspective

Most importantly, the Review describes a strategic intent that defines what the UK stands for and the strategic goals we want to achieve. It champions the UK as a global power stretching from Portsmouth to the Pacific. It also stresses the importance of narrative and utilising information for success. 

A successful strategy for a global power cannot define itself purely by reacting to events beyond its control or responding to another's strategic activities. It cannot be reactive to Chinese growth or merely respond to Russian aggression after it happens. The UK cannot hope to benefit from events that it does not anticipate.

Our approach must explain more: "What does the UK stand for?"

Some people may take this strategic vision for granted, arguing that the logic of Palmerston still applies: "Our interests are eternal and perpetual, and those interests it is our duty to follow."

It may also seem old-fashioned to highlight beliefs like democracy, free speech, universal education, open trade, universal rights, or fairness, as the Prime Minister does in the Review.

Alternatively, it may seem too modern to champion access to information, a free internet without censorship, diversity and inclusion, climate action, or tolerance of different views.

All these beliefs have faced recent domestic challenges in the UK and US. Their explicit inclusion is a strong statement, even if it is unlikely to gain the same newspaper headlines as killer robots, cyberwarriors, or foreign threats.

Pragmatists may want to keep all options open all the time using realpolitik, yet even realists require achievable goals.

Understanding what we stand for provides three strategic advantages:

  1. Gains international initiative,

  2. Confirms investment priorities, and

  3. Drives our internal understanding. 

It secures information advantage with a more explicit narrative that takes the initiative from our adversaries. A strategic vision creates clarity, consistency, and trust. Rather than respond to an adversary, our strategic narrative enables us to challenge our adversaries. 

It creates the debate around defence and security priorities before conflict rather than the discussion occurring during conflict. The Review defines a direction of travel for UK Defence and Security and, over the next few weeks, more details will emerge on what that means for sailors, soldiers, and aircrew. The inevitable arguments around force cuts and changes are more straightforward with a clearer strategic vision and narrative.

Most crucially, a more explicit strategic stance clarifies what we will fall for: what we expect our troops to die for in the future. Our cause may be right, but the ultimate sacrifice our forces may make is far easier to justify when that cause is clear.

There is much in the Review to praise and to champion. Our direction is now more precise, and our choices more apparent. It acknowledges that Defence is not isolated from the digital revolution transforming our broader society. It clarifies the threats we face and the approaches that adversaries use against us. 

We now need to take this clear national strategic vision and use it to gain the international initiative, prioritise our investments, and deliver on our goals.
