The Bletchley Declaration: The Imitation Game of a Different Kind

Last week, the United Kingdom hosted the AI Safety Summit, the first-ever international conference on the safety and regulation of artificial intelligence. With attendees from over 30 governments and international organizations, it marked the beginning of long-term international cooperation on AI safety, with the next summit already planned for 2024 in France.

Alexei Balaganski
Nov 08, 2023

The result of this meeting was “The Bletchley Declaration”, an international agreement signed by 28 countries and the European Union, including the US and China, confirming their shared commitment that future AI “should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

Sounds promising, right? Experts have been talking about the dangers of uncontrolled AI proliferation for years. James Cameron warned us about AI gaining self-awareness forty years ago! Elon Musk has been calling for a pause in AI development, citing grave dangers for our society. On a slightly more serious note, our own KuppingerCole analysts have published numerous articles on AI risks and challenges as well, including those in the field of cybersecurity. And yet, major world superpowers have been engaged in an AI arms race in recent years, and just a week ago, the US president issued a landmark Executive Order on the trustworthy development and use of AI to strengthen his country’s position in this race.

So, why such a sudden demonstration of unity amidst an ongoing global conflict? British Prime Minister Rishi Sunak called the Bletchley Declaration “a landmark achievement… helping ensure the long-term future of our children and grandchildren”, but can we really expect any substance behind this claim, or should we treat this paper as a mere symbolic gesture?

After all, the entire summit was full of overt symbolism. Bletchley Park, the location of the event, is, of course, the historic site where, during World War II, top British experts worked to break the secret communications of the Axis powers, especially the notorious German Enigma machine. And this valiant work was led by none other than Alan Turing, the same guy who later proposed the eponymous test to measure a machine’s ability to exhibit intelligent behavior. How cool is that?

Well, I don’t know about you, but I’m well aware that the actual cracking of the Enigma code was largely accomplished by Polish mathematicians years earlier, and of the appalling treatment of Turing himself by the British authorities, which drove him to suicide. That alone makes the whole symbolism ring a bit hollow for me. But even if we ignore the history and focus only on the Turing test itself, we should keep in mind that while it was a ground-breaking (and highly controversial) philosophical concept back then, it bears very little relevance to the modern AI developments that the declaration is supposed to regulate.

Does it really matter if a chatbot can fool you into thinking that you’re communicating with a human? There are much more urgent and practical technological, cultural, and moral problems that need to be addressed soon: for example, who is responsible if a driverless car runs over a pedestrian? Or a multitude of smaller but no less critical issues: dealing with bias and prejudice, the ethical consequences of AI tools making career-defining decisions for humans, deepfakes, and so on. Or defending against threat actors wielding their own AI tools…

Surely, all these issues must be addressed, and regulation is the most efficient way to do that. But there is an important distinction between regulating AI usage and regulating its development. The latter is simply meaningless, especially for technologies that can be easily weaponized. No matter how many times governments agree not to create “killer robots”, they will continue developing them anyway, because having killer robots at your army’s disposal is much better than not having them, especially if your country is planning another war.

So no, I do not consider this declaration a landmark achievement. It was merely an empty gesture to make British politicians feel a bit more important than they perhaps deserve. An imitation game, if you will. Or, more likely, a cargo cult. And of course, the AI battle between the US and China will continue. Nobody will voluntarily give up productivity tools like ChatGPT, and surely the black market of AI-based threats and exploits will continue to thrive.

But perhaps sometime in the future, when AI safety summits are attended by industry experts and not politicians, we will see more practical developments tailored to specific issues in certain fields or geographies. Until then, we will have to defend against the evil AIs ourselves. And a great place to discuss your immediate risks and challenges is the cyberevolution conference, which opens next week in Frankfurt, Germany, as well as online. Will we see you there?


Alexei Balaganski
KuppingerCole Analysts AG
Roles & Responsibilities at KuppingerCole As the KuppingerCole's CTO, Alexei is in charge for the company's IT needs and operations, as well as of R&D and strategic planning in the evolving technology space. He oversees the development and operations of KuppingerCole's internal IT projects that support all areas of the company's business. As Lead Analyst, Alexei covers a broad range of cybersecurity topics, focusing on such areas as data protection, application security, and security automation among others, publishing research papers, hosting webinars, and appearing at KuppingerCole's conferences. He also provides technical expertise for the company's advisory projects and occasionally supports cybersecurity vendors with their product and market strategies. Background & Education Alexei holds a Master's degree in applied mathematics and computer science, majoring in statistics and computational methods. He has worked in IT for over 25 years, in roles ranging from writing code himself to managing software development projects to designing security architectures. He's been covering cybersecurity market trends and technologies as an analyst since 2012. Areas of coverage Information protection and privacy-enhancing technologies Application security Web and API security Cloud infrastructure and workload security Security analytics and automation Zero Trust architectures AI/ML in cybersecurity and beyond