Category: Technology

  • Andrew Griffith – 2024 Speech at LEAP ’24

    Andrew Griffith – 2024 Speech at LEAP ’24

    The speech made by Andrew Griffith, the Science Minister, in Riyadh, Saudi Arabia on 4 March 2024.

    Good afternoon. It’s a pleasure to be here.

    I must start by thanking the patron of this conference and our gracious host, His Excellency Minister AlSwaha, and all of the teams behind this fantastic event.

    This is my second visit to the dynamic City of Riyadh in a few months and it is good to be back.

    The immense science and innovation ambition of the Kingdom in its Vision 2030 is clear and commendable.

    In its four priorities – health and wellbeing, sustainability, energy and economies of the future – Saudi Arabia has shown that it is ready to harness the power of research to tackle some of the greatest shared challenges of our time.

    Projects like NEOM, which seeks to harness the power of AI and net zero technologies to establish the most advanced human habitat on Earth, have the potential to drive forward innovation at a scale and pace almost without precedent in human history.

    I am here because I believe that Britain has a vital role to play in that story.

    With four of the world’s top ten universities, we have one of the most formidable research and innovation bases on the planet.

    And according to the World Intellectual Property Organisation, the UK is one of the most innovative economies.

    Like Saudi Arabia, we too, are unapologetically ambitious in capitalising on our strengths to grow our economy and improve lives for people in Britain and around the world.

    Our Science and Tech Framework sets out our ambition to become a science and technology superpower by 2030, with plans to lead in transformative technologies such as artificial intelligence, quantum, and synthetic biology.

    But, even though we are competitive, we are clear that no country can become a science and tech superpower in isolation.

    Just as the history of humankind is one of becoming more prosperous, living longer, and building great civilisations through free trade, global innovation is not a zero-sum game.

    And so today, my message is this:

    With our shared strengths and our levels of ambition, the UK and the Kingdom of Saudi Arabia can form a formidable research and innovation partnership for the future.

    That’s why I’m delighted to have today signed a Memorandum of Understanding between our two governments.

    This agreement will encourage our world-leading researchers to form productive partnerships in the years to come.

    And I lay down this challenge to British universities and institutes: come now and seek opportunities to collaborate in the innovative and fast-growing Saudi economy.

    Our two countries’ collaborations in this space are young, but we already have over 50 formalised partnerships.

    Over the last decade, they have delivered everything from joint centres of excellence, to research collaborations and visiting researcher programmes.

    Based on scientific publications, I am proud that Britain is already the Kingdom’s third largest collaborator in research and innovation.

    Actions matter, not just words, and that is why this May, I and a very senior delegation of UK businesses and ministers will return to Saudi Arabia in full force to launch our GREAT Futures Campaign – another chance to turbo-charge our innovation agenda.

    Honoured attendees, it is hard to think of a single challenge we face which won’t require innovation.

    The ‘to do’ list for global research and innovation has never run to so many lines.

    The horrifying consequences of anti-microbial resistance or future zoonotic disease pandemics.

    Protecting societies from extremist ideologies and keeping our children safe online.

    The growing challenges of obesity, cancer and dementia – whilst not neglecting the hunger and disease still faced by too many in the developing world.

    And that’s before we contemplate the need for new low carbon energy systems, creative ways to support mass urbanisation or urgent action to protect nature on our congested and fragile planet.

    Global challenges require a global response.

    And each of us in our national governments has a critical role to play.

    From revolutionary stem cell treatment for reversing sight loss to the first transatlantic flight run on 100% sustainable aviation fuel, the UK shows how publicly-funded research working with private capital and business can help transform the world for the better.

    Perhaps there is no better example than the COVID-19 vaccine, which went on to save an estimated 6 million lives and freed billions more across the globe from lockdown.

    The success of the vaccine only happened as the result of the excellence of Britain’s Universities combined with the innovation of our life sciences companies.

    Saudi Arabia is on a similar path.

    Government-led investment – combined with reforms designed to unleash innovation, like the establishment of the Research, Development and Innovation authority – is already delivering impressive results from public health to energy and the environment.

    NEOM and KAUST are employing digital twinning technology to set up the world’s largest coral reef restoration project.

    And like the UK’s BioBank, the Saudi Human Genome project is capturing the genetic blueprint of Saudi society to tackle disease with personalised medicine.

    Our commitment is strong and unwavering.

    Last month saw UK annual investment in research and development reach its highest ever level.

    The UK will spend £20 billion across the coming financial year.

    That’s one fifth of all government capital expenditure.

    And it adds up to more than £100 billion between now and 2030.

    Now, we are laser-focused on building an innovation ecosystem where it is simple and rewarding to take that world-leading research, and use it to start and scale a successful business in Britain.

    This is not just happening in world-renowned powerhouses like Oxford, Cambridge and London, but in every corner of the country.

    Take Stevenage – I don’t imagine many of you have heard about this town that sits squarely in the middle of England.

    Yet the Bioscience Catalyst science park in Stevenage is the single largest cluster of cell and gene therapy companies in Europe.

    This is no coincidence. Cutting-edge companies from around the world have chosen the UK to start-up and scale-up precisely because of those public-private partnerships I have been talking about.

    From small satellite manufacturing in Glasgow to semiconductors in South Wales, our thriving R&D ecosystem means that there are stories like this up and down the UK.

    In fact, my team have developed a new Cluster Mapping Tool to make it easier for investors, entrepreneurs and government to identify these hot spots of innovation.

    Of course, success will never be exclusively about raw investment.

    We in government also have a responsibility to ensure that regulators can provide innovative businesses with the clarity and certainty that they need to get their products and services to market quickly.

    I have run businesses myself, and I know how frustrating it can be to have a brilliant idea you are unable to execute, because clunky rules, risk averse regulators or out-of-date laws don’t allow for it.

    Good regulation should encourage innovation, not stifle it, even as we refuse to compromise on safety.

    Even in fast moving technologies, the right balance of regulation can help provide certainty to invest.

    A good example is the UK’s approach to the safety of Frontier AI and last year’s summit at Bletchley Park.

    It is why we have made delivering an ambitious regulatory reform agenda a top priority in the UK, and a key pillar in our science and tech framework.

    To conclude my remarks:

    We all in this room have an incredible opportunity.

    It’s an exciting time in innovation and an exciting moment to be an innovator.

    That’s true individually but it is also true at the whole economy scale where countries like the Kingdom of Saudi Arabia and the United Kingdom seek to be innovator economies; to grow and to improve the lives of their citizens and make a wider contribution.

    But at a time of shared global challenges, none of us can do it alone.

    We in government must work together – such as in the agreement the UK has today signed with Saudi Arabia – and by doing so we can support bigger, better, bolder science than we could ever do alone – and take on and solve the challenges that will define the future.

    Thank you.

  • Oliver Dowden – 2024 Speech on AI for Public Good

    Oliver Dowden – 2024 Speech on AI for Public Good

    The speech made by Oliver Dowden, the Deputy Prime Minister, at Imperial College London on 29 February 2024.

    INTRODUCTION

    Ladies and gentlemen…

    The story of technological advancement is one of constant evolution…

    … punctuated by game-changing innovations.

    In my lifetime, the personal computer, the internet, the smart phone, have all made the tech world – and our interaction with it – unrecognisable.

    And they have all – in turn – transformed the way that citizens interact with government, and with public services.

    I believe another such game-changer has arrived…

    … in the form of transformative AI models – including Large Language Models – that enable computers and humans to interact in totally new ways.

    The last fourteen years has been a period of incremental tech improvements.

    The digital interfaces we use are largely recognisable.

    Yes – we have seized new opportunities…

    … such as rolling out gov.uk…

    … and making our services “digital by default”.

    But many of the systems that we use have not kept up with advances…

    … indeed some of them, I’m afraid to say, have not moved on at all.

    Modern AI has the potential to fundamentally change the way that public services operate within just a few short years.

    Indeed, if we are still working off the same systems – and in the same way – in another 14 years… or even frankly another two or three…

    …then we will have failed to embrace the opportunity that now lies before us.

    OPPORTUNITY

    And so, just as the UK is leading the world in the field of AI safety…

    … the Prime Minister has asked me to ensure we are leading the world in the adoption of AI across our public sector.

    The opportunity here is hard to put a value on…

    … although I notice the IPPR have estimated that there is the potential to save £24 billion each year from roll-out of these new technologies.

    So for me it’s only by the rapid adoption of AI that we will drive the savings needed to put us on a sustainable path to a smaller state and better delivery of services.

    The pace of change is such that new opportunities are being uncovered literally on a daily basis, and a new world is opening out before us…

    AI is potentially – and I don’t say this lightly – a ‘silver bullet’…

    … it dangles before us the prospect of increased productivity, vast efficiency savings, and improved services.

    We are already beginning to see glimpses of what these tools have to offer…

    … and so I’d like to paint a brief picture of what the world might look like if we get this right:

    VISION OF SUCCESS

    In healthcare – AI diagnostic tools could transform primary care…

    …with appointments transcribed in real time by ambient AI, then instantly producing prescriptions and referrals…

    …  scans read by AI with far greater accuracy …

    … and medicines tailored to individuals based on their genetics – again using AI.

    In education – … AI could help eliminate excessive paperwork …

    …freeing-up teacher time to focus on what they do best…

    …AI assistants could help teachers to adapt lessons to the specific needs of each pupil…

    … and AI-augmented reality can take interactive learning to another level.

    In crime prevention – AI can direct police to where they are most needed…

    … spot patterns of criminality to discover culprits quicker than ever…

    …and help keep the streets safer for everyone.

    And in all kinds of public sector casework – from immigration processing to benefit claims – AI can be used to summarise complex information…

    … enabling expert case-workers to spend more time actually making decisions.

    I could go on nearly forever to cover all areas of public administration…

    … because there are very few areas of the public sector that don’t have the potential to be enhanced by these tools.

    HOW DO WE GET THERE?

    The question, though, is how do we get there?

    I believe the measures we are bringing forward put in the structures, resources, and mindset…

    … to put the UK on the fastest path to successful adoption of public sector AI.

    Taking advantage of our unique strengths…

    … to revolutionise public services for everyone in the months and years ahead.

    Last year, I established a small team of data scientists, engineers and machine learning experts at the heart of Government – the Incubator for AI – or ‘i.AI’ – under the energetic leadership of Dr Laura Gilbert.

    The idea was for these experts to work with departments to target the biggest opportunities to both save money and deliver better public services.

    The quality of applicants for this program has been phenomenal.

    It is incredibly exciting to see such talented technical people choosing to enter public service…

    … bringing in new ideas to help change the way government delivers services.

    In a few short months this team of just 30 individuals have instigated 10 pilot programs, including…

    • AI to flag fraud and error in pharmacies – that costs the taxpayer £1 billion every single year.
    • A tool that will read and summarise responses to Government consultations – which says something about the scale of Government consultations, but could save up to £80 million a year in central government alone…
    • And AI algorithms to help move asylum claimants out of hotels more efficiently… helping to save further millions.

    And I can also announce our intention to roll out a new gov.uk chatbot that will provide an interactive interface for people to better navigate Government information and services.

    But this is clearly just the very start…

    …I want to ensure that – where these pilots have proof of concept – we can scale them up as fast as possible…

    i.AI scale-up

    …And so, I can announce today that we will more than double the size of i.AI – to 70 people – recruiting the very best of British talent to drive this work across the public sector.

    This unprecedented influx of cutting-edge expertise into Government will enable us to design, build and – crucially – implement AI swiftly and at scale…

    Of course, there is still a huge role for the private sector – and I welcome the collaboration that we have with so many of the businesses in this room today.

    Nothing will match the strength and depth of the private sector AI innovation that is happening right now – and as all of you know so much of it here in the UK.

    But I believe that by embedding experts at the heart of Government…

    … and upskilling public servants to utilise these tools…

    …we will set ourselves up to deliver the benefits to citizens as quickly – and as efficiently – as possible.

    HORIZONTALS

    The other reason it is so important to have this team at the centre of Government is to ensure that – as AI rolls-out across the public sector – we adhere to the following principles:

    … sharing best practice…

    …deploying individual models to multiple use-cases…

    … finding economies of scale..

    … and, crucially, ensuring interoperability.

    Although I don’t claim for the moment to have the expertise needed to actually build AI models…

    … I can see that – like so many great inventions – there is something beautifully simple about what they are actually doing.

    Indeed, when you boil it down, I think there are four ways AI can be applied to much of public sector activity…

    … spotting patterns of fraud and error;

    … helping the public to navigate services;

    … managing casework;

    … and automating internal processes.

    And so the i.AI team have been looking across these applications with those principles in mind…

    … And I have agreed with the Treasury that we will make all funding for Government AI projects contingent on departments collaborating with i.AI.

    Never again should we be investing money in IT systems without considering how to make them as efficient and interoperable as possible…

    … or without robustly challenging both the timelines and the costs to deliver better value.

    I want to ensure that where we develop a tool for one department – we are considering where else it could be deployed.

    MINISTERIAL FORUM

    And so, to facilitate this discussion…

    …to ensure departments are fully integrated into this cross-government effort…

    … we need a regular dialogue between all those involved across government.

    And so I am convening a meeting of the National Science and Technology Council on AI for public sector good …

    … alongside my Co-Chair, Michelle Donelan – our fantastic Secretary of State for Science and Technology.

    Every department has now designated a specific minister to be responsible for AI in their area…

    … and I have asked for them to meet on a regular basis.

    In the Cabinet Office, this work will be led by Minister Burghart…

    … and I want to thank him for the passion, purpose and drive that he has brought to the programme so far. As is often the case, when you run a department you get to stand up and make the announcements, but it is Minister Burghart who has actually done the work to bring Government together to do this.

    WIDER PUBLIC SECTOR JOIN-UP

    Of course, central Government can only take this work so far…

    To truly maximise the benefits on offer we need to work with bodies and agencies right across the public sector.

    And so I am delighted to announce today that i.AI will sign a ‘Collaboration Charter’ with NHS England.

    This first-of-a-kind initiative will provide a framework for our experts in the incubator to support the NHS to identify and deploy AI solutions that improve services for patients.

    And I would urge other public sector bodies to consider doing exactly the same thing – I think it can bring enormous benefits.

    RESOURCING

    There is no shortage in the Government’s ambition to use AI for public good.

    We have put the expertise and the structures in place…

    … and we are making progress on our early pilot projects…

    …but we also appreciate the investment that will be needed to make good on our ambition to see the UK leading the pack.

    And crucially, investment will be required both to improve services and cut costs…

    But also to pave the way for a leaner public sector.

    MITIGATING RISK

    Through all of this, we are conscious of the need to guard against the risks that have rightly been flagged.

    And, while every effort will be made to eliminate bias, misinformation, and hallucinations…

    … ultimately, we are very clear about the need for human oversight…

    … and a clear distinction between AI suggestions and support on the one hand…

    …and human decision making on the other.

    CONCLUSION

    I believe we can take the worst things about public services…

    …whether that’s the time-wasting, form-filling, pencil-pushing, computer-says-no, the mind-numbing-ness of it…

    … and the kinds of things that make us want to tear our hair out…

    We can take those things and we can turn them around with the help of AI.

    This is not about replacing real people with robots…

    …it is about removing spirit-sapping, time-wasting admin and bureaucracy…

    …freeing public servants to do the important work that they do best…

    … and saving taxpayers billions of pounds in the process.

    We’ve got the political will. We’ve got the world-class civil service. We have the big data. We have the tech companies.

    We are ready.

    So let’s not wait.

    Let’s lead the way…

    …and join me in the AI revolution today.

  • Oliver Dowden – 2024 Statement on Emirates Telecommunications Group Company PJSC

    Oliver Dowden – 2024 Statement on Emirates Telecommunications Group Company PJSC

    The statement made by Oliver Dowden, the Deputy Prime Minister, on 26 January 2024.

    The UK Government has approved the Strategic Relationship Agreement between Vodafone and e&. Using the National Security & Investment Act it has put in place proportionate measures to address any potential national security concerns.

    The UK is rightly a magnet for global investment and, in this spirit, the Act is entirely country-agnostic.

    Where investment might impact the UK’s national security – for example through the acquisition of certain technologies or infrastructure – we will work with investment partners to minimise any risk. As part of our Critical National Infrastructure, telecoms is one such sector. Vodafone is also a particularly important company for the UK Government given its critical functions, including as a key partner in HMG’s Cyber Security Strategy.

  • Oliver Dowden – 2023 Speech on AI in Government

    Oliver Dowden – 2023 Speech on AI in Government

    The speech made by Oliver Dowden, the Deputy Prime Minister, on 20 December 2023.

    It’s great to be here, opening this sell-out event, and that was even before I was confirmed as a speaker.

    It is one of the biggest hands-on technical upskilling events the government has ever hosted.

    A historic event – and this is a historic moment in human history.

    Because artificial intelligence is changing everything – the way we live and the way we work.

    A big focus of the government has been on making sure those technologies are safe.

    Many of you were involved in delivering the world’s first ever AI Safety Summit, which took place at Bletchley Park earlier this month.

    But as well as the huge risks AI poses, there are also enormous opportunities – particularly for us in the public sector to transform productivity.

    As the Chancellor said at the weekend, some public servants waste a whole working day each week on admin.

    I’ve worked in government for many years and I know the frustrations.

    You just want to get on with your work – but it isn’t that easy.

    Stifled by systems.

    Bogged down by bureaucracy.

    Peed off by processes that haven’t changed in decades.

    No wonder, as Jim Hacker says in Yes Minister, “it takes time [for the civil service] to do things quickly” and “it’s more expensive to do things cheaply”.

    Well, all that can change – with the help of AI.

    The potential productivity benefits from applying these technologies to routine tasks across the public sector are estimated to be worth billions.

    The UK is already leading the way: ranked third in the Government AI Readiness Index and attracting £18 billion of private investment since 2016.

    Traditionally, though, the public sector has not been the fastest adopter.

    But with AI it doesn’t have to be that way.

    We have the big data.

    We have the large workforce.

    We have the finest minds and the keenest beans and a government which is one hundred per cent behind this, driven by our Prime Minister.

    So many sectors are embracing the opportunities and the benefits are being felt across society.

    90 per cent of stroke units are now using cutting-edge AI tools.

    Thousands of teachers have signed up to a pilot AI-powered lesson planner and quiz builder.

    We’re bringing that spirit to Whitehall.

    We’ve got civil servants upskilling through this One Big Thing initiative.

    Earlier this month I announced we were trialling AI red boxes to reduce paperwork – an idea that sprang from an Evidence House hackathon which many of you in this room took part in.

    And today I can unveil plans for a new, turbo-charged, ‘Incubator for AI’ team.

    Job adverts go live today – on our new website, ai.gov.uk – to boost this team to an initial 30 people: technical AI experts, programme managers, product managers and engagement specialists, all working together to rapidly enhance the adoption of AI through a centre of excellence.

    One of their first tasks will be to assess which Government systems have data curated in the right way to take advantage of AI and which systems need updating before that full potential can be harnessed.

    I think of the potential of this work, from correspondence to call handling, from health care to welfare.

    I don’t mean replacing real people with robots, or adding to the frustrations of dealing with government.

    I mean removing the things that annoy people most in their dealings with officialdom – namely the time it takes to do things quickly.

    Imagine that transformation from computer says no, to computer says yes.

    And we can all be part of that – we all deal with digital and data in some way or another.

    So let us, the civil service, be the early adopters.

    Let us be the trailblazers.

    Let Whitehall show the country – and the world – how it’s done.

    The revolution has just begun.

    Thank you.

  • Oliver Dowden – 2023 Speech on Cyber Operations

    Oliver Dowden – 2023 Speech on Cyber Operations

    The speech made by Oliver Dowden, the Deputy Prime Minister, on 7 December 2023.

    Of all the risks that this country faces… there are none that are evolving more rapidly than those in the cyber domain.

    More actors…

    Have more sophisticated tools…

    To target more people…

    Than ever.

    Protecting the public from cyber attack is a matter of the utmost importance.

    Let’s be clear what’s being targeted here.

    The critical services that government delivers:

    Our public finances…

    Our roads and railways…

    Our schools…

    Our health service…

    Our armed forces…

    Even the heart of central government itself.

    Of all the vaults that cyber criminals are desperate to crack into…

    … this one contains some of the greatest rewards.

    That’s why we see so many attempts to breach our digital defences.

    Last year, 40 per cent of the attacks addressed by the National Cyber Security Centre were against the public sector.

    In a world where the new frontline is online…

    …the people in this room are manning the barricades to keep us safe and secure…

    … and for that I want to say thank you.

    Despite the challenges we face, our cyber defences are stronger than ever.

    Since it was published two years ago, the Government Cyber Security Strategy has been a game-changer.

    Work is well underway to ensure that government’s most critical functions are significantly hardened to cyber attack.

    And we have established ambitious targets that will see all government organisations made resilient to known vulnerabilities and common attack methods.

    Through GovAssure – which I launched in April – we have transformed the oversight of government cyber security…

    And the new ‘Government Cyber Coordination Centre’ – better known as ‘GC3’ is bringing together a community of cyber defenders from across government…

    …sharing best practice…

    … and showing that a “whole of government approach” is not a slogan, it’s a reality.

    Working together with the National Cyber Advisory Board… (which I Chair)…

    …and of course the National Cyber Security Centre.

    All of you play a crucial role in iterating the strategy…

    … and ensuring it is implemented right across Government.

    Your work never stops… because the risk of attack never stops.

    The threats we face are increasing and the nature of those threats is evolving.

    Technologies are developing at an exponential rate…

    …and have lowered the bar for hostile actors – states and criminals.

    The biggest cyber threats are not just to our public services but the democratic means by which we deliver them.

    Some states are likely to be harnessing significantly more sophisticated technology to sow confusion and dissension and chaos in our society.

    Malicious actors continue to target high profile people within the political process.

    This is not an abstract possibility. We have already seen it…

    In Ukraine – with deep-fakes of President Zelensky…

    In the US – where Iranian hackers have been indicted for undermining voter confidence and sowing discord…

    And here in the UK – with our Electoral Commission targeted by a complex cyber attack.

    As I warned at CYBERUK in Belfast in April…

    …the greatest risks still emanate from the “usual suspects”…

    …China, Iran, North Korea, and Russia.

    But they are increasingly using ‘Wagner-style’ sub-state hackers to do their dirty work.

    Today in concert with our Five Eyes and Euro-Atlantic partners….

    I can tell you that a unit within the Russian Federal Security Service, known as Centre 18, has been behind sustained hostile cyber operations…

    …aimed at interfering in parts of the UK’s democratic processes…

    This has included targeting members of parliament…

    …Civil servants, think tanks, journalists, and NGOs…

    …through a group commonly known as Star Blizzard.

    This group, operated by FSB officers, has also selectively leaked and amplified information designed to undermine trust in politics, both in the UK and in like-minded states.

    A senior representative of the Russian government has been summoned to the Foreign Office this morning and appropriate sanctions have been levelled.

    Our political processes and institutions will continue to endure in spite of these attacks.

    But they serve to prove that the cyber threat posed by the Russian Intelligence Services is real and serious.

    It is a stark reminder that…

    as we in government develop our capabilities…

    …so do our adversaries, and those who do their bidding.

    We are in a cyberspace race…

    …them – to develop the tools to do us harm…

    …us – to build the defences needed to protect against their attacks.

    Next year, 3 billion people in 40 countries will head to the polls …

    … and it is a fact that hostile state actors will continue to seek to undermine these collective expressions of democracy…

    …because they fear the freedoms they represent.

    We must – all of us – do all we can to resist.

    There are two main ways in which we can get ahead:

    Strengthening our cyber security systems…

    …and improving our skills.

    First, our systems.

    It wasn’t that long ago that the government was still using fax machines.

    I worked for the administration that helped to bring Whitehall into the digital age…

    …and made our services “digital by default”.

    The challenge is to make those digital systems “secure by design”…and to embed effective cyber security practices into our digital delivery.

    That’s why I am announcing today that we will make security everyone’s responsibility…

    …and make “secure by design” mandatory for central government organisations.

    This approach is already inspiring our partners around the world…

    …and, like our earlier digital revolution, is likely to be widely emulated.

    Your role in embedding this approach at home will be crucial.

    Then there is the question of skills.

    In this room we have a wealth of deep technical expertise…

    …and we have the ability to share and collaborate with our international partners.

    But we need the experts of the future to be coming up, through that pipeline, to meet the challenges of the future.

    In the UK, as around the world, the shortage of cyber skills affects both the public and private sectors.

    It is estimated that we have a shortfall of around 14,000 professionals…

    …and that shortfall is particularly stark in the public sector.

    As one of the largest employers of cyber security experts, the government’s actions can make a real difference to the makeup of the national profession.

    So we have launched apprenticeship and fast stream programmes focused specifically on finding and developing cyber talent.

    This is the new frontline.

    And we must form a united front…

    …government, business, academia, individuals, all coming together to pre-empt and ward off these risks.

    Not just “whole of government” – but “whole of society”.

    It is what we have that our adversaries and their agents lack: unity.

    And there are huge opportunities in that…

    …particularly for our entrepreneurs and innovators.

    They will develop the defensive technologies that will protect not just this country… but the world.

    Britain has the opportunity to lead … in tech, in AI and in cyber…

    …because the best place in the world to do business must also be the safest place in the world to do business…

    …and together we can make that a reality.

    Thank you.

  • Michelle Donelan – 2023 Speech to the FOSI Annual Conference

    Michelle Donelan – 2023 Speech to the FOSI Annual Conference

    The speech made by Michelle Donelan, the Secretary of State for Science, Innovation, and Technology, on 14 November 2023.

    Hello and thank you for having me here today, it is a pleasure to be in Washington.

    Now from the outset I must confess I have brought a number of British bugs with me, and so if I end up coughing, spluttering or drying up, please forgive me and bear with me, but I will do my very best throughout the speech.

    And there is a reason that my first speech on the subject of online safety since the UK’s world-leading Online Safety Act passed is taking place here in the United States. Because the UK and the USA obviously share a special relationship that is fundamentally about our values.

    The Online Safety Act – which I want to talk about for a bit today – is about reaffirming our longstanding values and principles and extending them to the online world. Empowering adults, protecting free expression, standing up for the rule of law, and most importantly, protecting our children.

    These are the values that Britain has pioneered for centuries, and they are also the values that made the extraordinary story of the United States possible.

    In the most recent chapter of that story, the transformational power of the internet has created the online world that is increasingly, seamlessly intertwined with the real world. But the values that made our free, safe, liberal societies possible have not been reflected online – especially when it comes to social media.

    The guardrails, customs and rules that we have taken for granted offline have, in the last two decades, become noticeable in their absence online. FOSI have been an important part of the conversation to identify this problem, and I want to extend my thanks to you for all the tireless work that you’ve done on this incredibly important agenda.

    And thanks to the work of campaigners here and in the UK, lawmakers from Washington to Westminster have taken the issue of online safety increasingly seriously, especially when it comes to the protection of our children.

    And today I want to share with you how we rose to the challenge of online safety in the UK – what we did, how we did it, and I guess why we did it as well.

    I think the why of that equation is the best place to start, given FOSI’s role in helping to answer that question over the years. Now, my department was created back in February to seize the opportunities of our digital age. Not just the opportunities that are in front of our generation now, but the opportunities that will potentially shape the futures of our children and our grandchildren.

    My 6-month-old son will grow up thinking nothing of his ability to communicate with people thousands of miles away and, I hope, he’s going to go on and do much more. Sharing research with his school friends, learning the languages of countries that he might not even have visited, and gaining new skills that will enable him to take full advantage of his talents when he grows up. Of course, if you ask my husband, he will tell you he hopes that those talents will lead him to Premier League football.

    But we cannot afford to ignore the dangers that our children increasingly face online and I do think it is a sobering fact that children nowadays are just a few clicks away from entering adulthood, whether that’s opening a laptop or picking up an iPad.

    And despite the voluntary efforts of companies and the incredible work of campaigners, the stats tell us unequivocally that voluntary efforts are simply not enough.

    Did you know that the average age that a child sees pornography is 13? When I first heard that, it really, really struck me as something that needs to be dealt with. And a staggering 81% of 12–15-year-olds have reported coming across inappropriate content when surfing the web, including sites promoting suicide and self-harm.

    Now, regardless of ideology or political party, I don’t think anyone can look at what’s happening to our children and suggest that the hands-off approach that has dominated so far is working. I believe that we have a responsibility, and in fact a duty, to act when the most vulnerable in our society are under increasing threat – especially our children.

    So, when I stood in the House of Commons during the Bill’s passage, I said enough is enough – and I meant it.

    Now, I defy any person who says it cannot or should not be done – as adults it is our fundamental duty to protect children and be that shield for them against those who wish to do them harm. And that is why in the UK, I have been on somewhat of a mission to shield our children through the Online Safety Act.

    And we started with the obvious – applying the basic common-sense principles of what is illegal offline, should actually be illegal online. Quite simply if it is illegal in the streets – it should be illegal in the tweets.

    No longer will tech companies be able to run Wild West platforms where they can turn a blind eye to things like terrorism and child abuse. The days of platforms filled with underage users, when even adverts are tailored to those underage users, are now over.

    If you host content only suitable for adults, then you must use highly effective age assurance tools to prevent children from getting access.

    We can and we will prevent children from seeing content that they can never unsee – pornography, self-harm, serious violence, eating disorder material – no child in Britain will have to grow up being exposed to that in the future and I think that that is quite remarkable. Because when we consider the impact that that content is having on our children, it is quite frankly horrific.

    Of course, we know that most websites and all the major social media platforms already have some policies in place to safeguard children – in a few days I am travelling to Silicon Valley to meet many of them, and what I will be telling them, is that the Online Safety Act is less about companies doing what the Government is asking them to do – it is about the companies doing what their users are asking them to do.

    Most companies actually do have robust and detailed terms of service. In fact, all of the 10 largest social media platforms in the world ban sexism, they ban racism, homophobia, and just about every other form of illegal abuse imaginable.

    Yet these terms are worthless unless they are enforced – and too often, they are not consistently enforced.

    So, the legislation that we have produced in the UK will mean that social media platforms will be required to uphold their own terms and conditions.

    For the first time ever, users in Britain can sign up to platforms knowing that the terms they agree with will actually be upheld, and that the platforms will face eye-watering fines if they fail to do so.

    But do not make the mistake of thinking that this Act is anti-business. Far from it, we view the Online Safety Act as a chance to harness the good that social media can do whilst tackling the bad, and because we believe in proportionality and innovation, we have not been prescriptive in how social media giants and messaging platforms should go about complying.

    I believe it’s never the role of the Government to dictate to business which technologies they use. Our approach has remained ‘tech neutral’ and business friendly.

    To borrow an American phrase, we are simply ensuring that they step up to the plate and use their own vast resources and expertise to provide the best possible protections for children.

    And I know this matters on the other side of the Atlantic too, because the online world does not respect borders, and those who wish to do our children harm should not be emboldened by a sense that they can get away with it in some countries and not in others, or be able to use that to their advantage.

    And that is why in the UK, we are taking steps to enable our online safety regulator, Ofcom, to share information with regulators overseas including here.

    These powers will complement existing initiatives, such as the Global Online Safety Regulators Network. A vital programme – which of course was launched at the FOSI conference last year – bringing together like-minded regulators to promote and protect human rights.

    And this momentum has been backed up by government action too, with the US Administration establishing an inter-agency Kids Online Health and Safety Task Force. Both of these initiatives are very welcome signs of the increasing unity between the UK and the US on this important agenda.

    Many of the aims perfectly complement what we are trying to do in the UK and I am keen that both our governments continue to work together.

    And while protecting children has remained our priority throughout the legislative process, we have been incredibly innovative with the way that we help protect adults online too. I believe when it comes to adults, we must take a different approach to the one that we take for children.

    Liberty and free expression are the cornerstones of the UK’s uncodified constitution, and of course at the heart of the US Constitution and Bill of Rights. So when thinking about protecting adults online, we knew we could not compromise these fundamental principles.

    In fact, I believe that the Act would have to actively promote and protect freedom and liberty for adults if it were to be successful in the long term, and that’s exactly what we did.

    So rather than tell adults what legal content they can and cannot see, we instead decided to empower adults with freedom and choice – on many platforms for the very first time. Known as user empowerment tools, the Bill requires companies to finally give adults a direct choice over the types of content they see and engage with.

    Taking the power out of the hands of unaccountable algorithms and placing it back in the hands of each and every individual user. Where an adult does not want to see certain types of legal content, they will have the power to toggle that content on and off as they choose, and in some cases, filter out keywords.

    Choice, freedom, and control for adults, while robustly protecting children at the same time. Together, these form the framework that we believe will become the global norm for online safety in the decades ahead.

    Now, just finally, while the glow of our successful Global AI Safety Summit is still bright, I want to touch briefly on the challenges of AI when it comes to online safety.

    We are discussing ‘New Frontiers in Online Safety’ today – and it is impossible to do that without talking about the technology that will define this century.

    Although AI brings enormous opportunities – from combating climate change to discovering life-saving drugs to helping our public services – it also brings grave risks, including to online safety. We saw that just the other month in southern Spain, where fake nude images of real girls had been created using AI – a case that shocked us all.

    And recently in Britain, fake AI-generated audio also targeted the leader of the opposition and spread rapidly on social media before being promptly debunked. So, we must be clear about the serious threat AI presents to our societies, from our children’s safety to our democratic processes and the integrity of our elections, something that we both care acutely about as we march towards our elections.

    And that is why we hosted the first ever AI Safety Summit earlier this month at Bletchley Park, where 28 countries and the European Union were represented, representing the vast majority of the world’s population. And we signed an unprecedented agreement known as the Bletchley Declaration.

    Despite some claiming that such a declaration would be rejected by many countries in attendance, we actually agreed that for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and of course responsible.

    But I have been clear that when it comes to online safety, especially for our children, we cannot afford to take our eye off the ball in the decade to come.

    And the historic Bletchley Declaration lays out a pathway for countries to follow together that will ultimately lead to a safer online world, but it is up to us all to ensure that we continue down that pathway.

    And in support of that mission, I have directed the UK’s Frontier AI Taskforce to rapidly evolve into a new AI Safety Institute, giving our best and brightest minds a key role in delving into the risks that AI presents, as well as in pre-deployment testing. And of course, it will partner with the US’s own Safety Institute, which the Vice President announced in London during the summit.

    We must also recognise that AI can of course be part of the solution to many of the problems we are discussing today – from detecting and moderating harmful content to proactively mitigating potential risks like the generation and dissemination of deepfakes.

    FOSI’s new report, published today, provides important insights on the early use of generative AI tools by parents and teens, and how it will impact children’s safety and privacy online. I will be taking these findings back to my officials in London and ensuring that we deepen the already close relationship between our two countries when it comes to protecting our children.

    Now, while I hope my speech today has been somewhat of a soft-sell if you like for the online safety framework that we have created in the UK, I actually don’t think our approach really requires salesmanship to the rest of the world. Because even before our Online Safety Act became law, companies began implementing key parts of its provisions and adapting their behaviour.

    Many social media platforms now allow keyword filtering, some have started exploring and piloting age assurance methods, and many are proactively cleaning up illegal content through new innovative techniques.

    So, if there is one thing I want to say to American policymakers who want to make a real difference for children and adults online, it’s be ambitious, put children first, front and centre, and above all, defend the values that you would expect to see on the streets as ferociously online as you would in person.

    As the online world and the offline world merge ever closer together, now is the time to stand firm and uphold the values that we share, and the values that got us here in the first place.

    Thank you.

  • Baroness Neville-Rolfe – 2023 Speech at the Government Security Conference

    Baroness Neville-Rolfe – 2023 Speech at the Government Security Conference

    The speech made by Baroness Neville-Rolfe, a Minister of State at the Cabinet Office, on 1 November 2023.

    Thank you, Vincent, for that kind welcome – and good evening, everyone.

    Thank you all for coming and to the Government Security Group in particular for your offer of hospitality in the days of work ahead.

    And I will start with a question.

    Could there be a more important time for a conference on security?

    We meet at a very difficult time. The world is getting darker and we face enormous threats to world security.

    The complexities of security have been more evident in the last few months than ever before…

    …war in Ukraine, conflict in Israel and Palestine, and the constant drip, drip, drip of cybercrime and fraud – which could, if we let it, become a deluge.

    But it’s not just criminals we need to concern ourselves with…

    …whole countries are turning to their computers to commit crime. It is no longer the loner in their bedroom planning cyberattacks…

    …it’s buildings of people, sanctioned by their state, challenging the basic conditions for an open, stable and peaceful international order which everyone in this room will support.

    We explained the difficulties in our Integrated Review Refresh in March and called out ways in which the world was getting darker.

    Moreover, as the world turns, our security needs will become more complex…

    …and this complexity is being demonstrated in Bletchley Park right now, as the Prime Minister hosts the first ever Global AI Safety Summit…

    …countries from across the world – and tech leaders and innovators – all working together with one goal…

    …which is to ensure that the next tech frontier is as safe and secure as possible.

    Today’s session at our conference is about how collaboration will strengthen the security of our governments…

    …governments that are threatened by increasingly skilled adversaries…

    …adversaries who are determined to exploit our large quantities of data, and hold to ransom our online public services.

    Today, I want to outline how the UK Government is staying secure…

    …and how we are collaborating across the world to improve international security. I have already mentioned cybercrime…

    …soon enough, this type of crime will become so commonplace that it will simply be known as ‘crime’.

    I am clear that the digital world is one of the battlegrounds of the future…

    …where frontlines are not defined by physical borders. This is a big change.

    Hybrid methods of warfare have long been used to destabilise adversaries, but cybersecurity threats are evolving at an alarming pace. Malicious actors exploit vulnerabilities in our interconnected systems.

    A few years ago, WannaCry wreaked havoc in the UK’s National Health Service. Today, 8 out of 10 ransomware attacks come from Russian-speaking sources.

    However, I believe that the UK is prepared to tackle these challenges.

    Our National Cyber Security Strategy outlines how we will bolster Government digital infrastructure to withstand attacks…

    …we are training businesses and public services about how to remain resilient against digital crime…

    …and as the third largest exporter of cybersecurity services globally, we are sharing our expertise with the world.

    But as criminals adapt their methods, we too must adapt.

    Take the fight against public sector fraud, which transcends national borders and threatens our national security.

    The UK’s leadership of the International Public Sector Fraud Forum is crucial here.

    Through this dialogue both the UK and our partners are alive to the developing issues…

    …and coming up with ways to fight the fraudsters, wherever they are. I was fortunate to attend their forum earlier this year…

    …and I was struck, very struck, by the strength of our relationship with our Five-Eyes partners…

    …and how that partnership is enhancing fraud prevention, improving investigative techniques, and leading to a better understanding of different types of attacks, including ransomware.

    In fact, ransomware featured strongly in my discussions at Singapore International Cyber Week a fortnight ago.

    It was clear to me that Singapore is a good place for these discussions. It sits at the very heart of the Indo Pacific…

    …which has become a greater focus for British foreign and security policy for a number of reasons.

    It was a successful visit for us all…

    …one which builds on our recent achievements in the region, including the AUKUS agreement, obtaining Dialogue Partner status with ASEAN, several trade deals and a recent UK–Singapore Strategic Partnership agreed by our Prime Minister…

    … a partnership built on how like-minded we are when it comes to cybersecurity, and our joint leadership in advanced artificial intelligence, on which we are spending a lot of time this week.

    I am pleased to say that we are building on this national and international work.

    This year, we announced a new Integrated Security Fund – replacing the much-loved Conflict, Stability and Security Fund…

    …which will help keep the UK safe and address global sources of volatility and insecurity.

    With a budget of almost £1 billion, it will, for example, help develop regional cyber strategies and training…

    …both essential components which will help our allies deter cyberattacks on their national infrastructure.

    I mentioned ASEAN, and this fund is delivering technical and policy capacity building in ASEAN…

    …but the Fund also supports projects that assist Ukraine and counter Russian disinformation.

    But it’s not enough to bolster projects that already exist…

    …we have to also invest in the skills, skills for the future, so the projects of the future – ones we can’t even comprehend yet – can be created and maintained.

    It is clear that the UK can be a leader in digital skills…

    We are the European leaders in Fintech, with 1,600 firms based here…

    …our telecoms, computer and information services exports are valued at over £38 billion…

    …and with 1% of the world’s population – so we’re not that huge – we have built the 3rd largest AI sector in the world.

    Despite this, and I’m sure this is agreed, we must do more globally to foster data and digital skills, and in particular our cyber talent pipeline…

    …and the professionalisation of cyber security internationally, to match our professional success in law and accountancy.

    But, as the threats we face are increasingly global in nature, we have to work with global partners to confront them…

    …and that is why I was so pleased to announce – as part of my visit to Singapore – a new Women in Cyber Network across South East Asia…

    …which will run women-led projects that address regionally specific cybersecurity challenges, with the support of UK best practice – and I was delighted to see how many women there were in the US delegation.

    Nowhere is this focus on skills more needed than in the area of supply chains.

    Strong and resilient supply chains are of fundamental importance to our economic and national security…

    …and it is prudent to set common standards for suppliers, to support a secure and prosperous international order.

    It has been wonderful to see the Five Eyes’ global leadership flourish in areas such as software security and supplier assurance…

    …but it behoves us to do more and faster.

    Because if we don’t, our adversaries will exploit our open economies to use ownership models and state-backed companies against us…

    …with Huawei and HikVision being prime examples.

    Our new UK Procurement Act – which received Royal Assent last week – will help tackle this specific threat.

    It will enable us to reject bids from any Government supplier that poses a threat to national security…

    …and we are setting up a new National Security Unit for Procurement in the Cabinet Office, which will advise the Government on future priorities.

    We are going even further to prevent interference in our political infrastructure through our Defending Democracy Taskforce – of which I am a member – under the leadership of Tom Tugendhat, the security minister at the Home Office.

    It is working across government to protect the integrity of our democracy from threats of foreign interference.

    This is now teeing up work to protect our representatives and voting systems from hostile attacks at our next election.

    Here, too, the importance of collaboration across governments to reduce these and other security risks cannot be overstated. After all, next year is an election year in the EU and US.

    Ladies and gentlemen, it is clear that – in our interconnected world – our security is a shared responsibility.

    What we can achieve together is an all-round ecosystem of security built on our world-class foundations of education, expertise, technology and capability.

    Yes, our security needs are more complex than they used to be, but in the face of that complexity we must remain committed to collaboration.

    Collaboration on our shared security will help us overcome fraudsters, criminals, bandit states – and indeed anyone who wants to undermine the strength of our partnerships for their own gains.

    If we hold our resolve, it is clear to me they will not win…

    …and through our partnerships, we will help build a stronger, more resilient and more secure world.

    Thank you for listening.

  • Rishi Sunak – 2023 Speech on AI Safety Summit

    Rishi Sunak – 2023 Speech on AI Safety Summit

    The speech made by Rishi Sunak, the Prime Minister, at Bletchley Park on 2 November 2023.

    It was here at Bletchley Park where codebreakers including the British genius Alan Turing cracked the Enigma cipher…

    …and where we used the world’s first electronic computer.

    Breakthroughs which changed the possibilities for humanity.

    So there could be nowhere more fitting for the world to come together…

    …to seize the opportunities of the greatest breakthrough of our own time….

    …while giving people the peace of mind that we will keep them safe.

    I truly believe there is nothing in our foreseeable future that will be more transformative for our economies, our societies and all our lives…

    ….than the development of technologies like Artificial Intelligence.

    But as with every wave of new technology, it also brings new fears and new dangers.

    So no matter how difficult it may be…

    ….it is the right and responsible long-term decision for leaders to address them.

    That is why I called this Summit….

    …and I want to pay tribute to everyone who has joined us, and the spirit in which they have done so.

    For the first time ever, we have brought together CEOs of world-leading AI companies….

    … with countries most advanced in using it….

    …and representatives from across academia and civil society.

    And while this was only the beginning of the conversation,

    I believe the achievements of this summit will tip the balance in favour of humanity.

    Because they show we have both the political will and the capability to control this technology and secure its benefits for the long-term.

    And we’ve achieved this in four specific ways.

    Until this week, the world did not even have a shared understanding of the risks.

    So our first step was to have open and inclusive conversation to seek that shared understanding.

    We analysed the latest available evidence on everything from social harms like bias and misinformation…

    …to the risks of misuse by bad actors…

    …through to the most extreme risks of even losing control of AI completely.

    And yesterday, we agreed and published the first ever international statement about the nature of all those risks.

    It was signed by every single nation represented at this summit covering all continents across the globe…

    …and including the US and China.

    Some said, we shouldn’t even invite China…

    ….others that we could never get an agreement with them.

    Both were wrong.

    A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers.

    And all of them have signed the Bletchley Park Communique.

    Second, we must ensure that our shared understanding keeps pace with the rapid deployment and development of AI.

    That’s why, last week I proposed a truly global expert panel to publish a State of AI Science report.

    Today, at this summit, the whole international community has agreed.

    This idea is inspired by the way the Intergovernmental Panel on Climate Change was set up to reach international science consensus.

    With the support of the UN Secretary General…

    …every country has committed to nominate experts.

    And I’m delighted to announce that Turing Award winner and ‘godfather of AI’ Yoshua Bengio…

    …has agreed to chair the production of the inaugural report.

    Third, until now the only people testing the safety of new AI models…

    …have been the very companies developing it.

    That must change.

    So building on the G7 Hiroshima process and the Global Partnership on AI…

    …like-minded governments and AI companies have today reached a landmark agreement.

    We will work together on testing the safety of new AI models before they are released.

    This partnership is based around a series of principles which set out the responsibilities we share.

    And it’s made possible by the decision that I have taken – along with Vice President Kamala Harris…

    ….for the British and American governments to establish world-leading AI Safety Institutes…

    …with the public sector capability to test the most advanced frontier models.

    In that spirit I very much welcome the agreement of the companies here today to deepen the privileged access that the UK has to their models.

    Drawing on the expertise of some of the most respected and knowledgeable AI experts in the world…

    …our Safety Institute will work to build our evaluations process in time to assess the next generation of models before they are deployed next year.

    Finally, fulfilling the vision we have set to keep AI safe is not the work of a single summit.

    The UK is proud to have brought the world together and hosted the first summit.

    But it requires an ongoing international process…

    …to stay ahead of the curve on the science…

    …and see through all the collaboration we have begun today.

    So we have agreed that Bletchley Park should be the first of a series of international safety summits…

    …with both Korea and France agreeing to host further summits next year.

    The late Professor Stephen Hawking once said that –

    “AI is likely to be the best or worst thing to happen to humanity.”

    If we can sustain the collaboration that we have fostered over these last two days…

    …I profoundly believe that we can make it the best.

    Because safely harnessing this technology could eclipse anything we have ever known.

    And if in time history proves that today we began to seize that prize…

    …then we will have written a new chapter worthy of its place in the story of Bletchley Park…

    …and more importantly, bequeathed an extraordinary legacy of hope and opportunity for our children and the generations to come.

  • Michelle Donelan – 2023 Speech at the AI Safety Summit

    Michelle Donelan – 2023 Speech at the AI Safety Summit

    The speech made by Michelle Donelan, the Secretary of State for Science, Innovation and Technology, in Milton Keynes on 1 November 2023.

    Good morning, everybody.

    It is my privilege to welcome you all to the first ever global summit on Frontier AI safety.

    During a time of global conflict eight decades ago, these grounds here in Bletchley Park were the backdrop to a gathering of the United Kingdom’s best scientific minds, who mobilized technological advances in service of their country and their values.

    Today we have invited you here to address a sociotechnical challenge that transcends national boundaries, and which compels us to work together in service of shared security and also shared prosperity.

    Our task is as simple as it is profound: to develop artificial intelligence as a force for good.

    The release of ChatGPT, not even a year ago, was a Sputnik moment in humanity’s history.

    We were surprised by this progress — and we now see accelerating investment into and adoption of AI systems at the frontier, making them increasingly powerful and consequential to our lives.

    These systems could free people everywhere from tedious work and amplify our creative abilities.

    They could help our scientists unlock bold new discoveries, opening the door to a world potentially without diseases like cancer and with access to near-limitless clean energy.

    But they could also further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security.

However, there is a significant debate – and I am sure it will be a very robust one among the attendees over the next two days…

…about whether these risks will materialise.

    How they will materialise.

    And, potentially, when they will materialise.

    Regardless, I believe we in this room have a responsibility to ensure that they never do.

    Together, we have the resources and the mandate to uphold humanity’s safety and security, by creating the right guardrails and governance for the safe development and deployment of frontier AI systems.

    But this cannot be left to chance, neglect, or to private actors alone.

    And if we get this right – the coming years could be what the computing pioneer J.C.R. Licklider foresaw as “intellectually the most creative and exciting in the history of humankind.”

    This is what we are here to discuss honestly and candidly together at this Summit.

    Sputnik set off a global era of advances in science and engineering that spawned new technologies, institutions, and visions, and led humanity to the moon.

    We, the architects of this AI era — policymakers, civil society, scientists, and innovators — must be proactive, not reactive, in steering this technology towards the collective good.

    We must always remember that AI is not some natural phenomenon that is happening to us, but it is a product of human creation that we have the power to shape and direct.

    And today we will help define the trajectory of this technology, to ensure public safety and that humanity flourishes in the years to come.

    We will work through four themes of risks in our morning sessions, which will include demonstrations by researchers from the UK’s Frontier AI Taskforce.

    Risks to global safety and security…

    … Risks from unpredictable advances,

    … from loss of control,

    … and from the integration of this technology within our societies.

    Now, some of these risks do already manifest as harms to people today and are exacerbated by advances at the frontier.

    The existence of other risks is more contentious and polarizing.

    But in the words of mathematician I.J. Good, a codebreaker colleague of Turing himself here at Bletchley Park, “It is sometimes worthwhile to take science fiction seriously.”

Today is an opportunity to move the discussion forward, from the speculative and philosophical towards the scientific and the empirical.

    Delegations and leaders from countries in attendance have already done so much work in advance of arriving…

    …across a diverse geopolitical and geographical spectrum to agree the world’s first ever international statement on frontier AI – the Bletchley Declaration on AI Safety.

    Published this morning, the Declaration is a landmark achievement and lays the foundations for today’s discussions.

    It commits us to deepening our understanding of the emerging risks of frontier AI.

    It affirms the need to address these risks – as the only way to safely unlock extraordinary opportunities.

    And it emphasises the critical importance of nation states, developers and civil society, in working together on our shared mission to deliver AI safety.

    But we must not remain comfortable with this Overton window.

    We each have a role to play in pushing the boundaries of what is actually possible.

And that is what this afternoon will be all about: discussing what actions different communities will need to take next, bringing out diverse views, and opening up fresh ideas and challenging them.

    For developers to discuss emerging risk management processes for AI safety, such as responsible, risk-informed capability scaling.

    For national and international policymakers to discuss pathways to regulation that preserve innovation and protect global stability.

    For scientists and researchers to discuss the sociotechnical nature of [safety], and approaches to better evaluate the risks.

    These discussions will set the tone of the Chair’s summary which will be published tomorrow. They will guide our collective actions in the coming year.

    And this will lead up to the next summit, that I am delighted to share with you today will be hosted by the Republic of Korea in six months’ time. And then by France in one year’s time.

    These outputs and this forward process must be held to a high standard, commensurate with the scale of the challenge at hand.

    We have successfully addressed societal-scale risks in the past.

    In fact, within just two years of the discovery of the hole in the Antarctic ozone layer, governments were able to work together to ratify the Montreal Protocol, and then change the behaviour of private actors to effectively tackle an existential problem.

    We all now look back upon that with admiration and respect.

    But for the challenges posed by frontier AI, how will future generations judge our actions here today?

    Will we have done enough to protect them?

    Will we have done enough to develop our understanding to mitigate the risks?

    Will we have done enough to ensure their access to the huge upsides of this technology?

    This is no time to bury our heads in the sand. And I believe that we don’t just have a responsibility, we also have a duty to act – and act now.

    So, your presence here today shows that these are challenges we are all ready to meet head on.

    The fruits of this summit must be clear-eyed understanding,  routes to collaboration, and bold actions to realise AI’s benefits whilst mitigating the risks.

    So, I’ll end my remarks by taking us back to the beginning.

    73 years ago, Alan Turing dared to ask if computers could one day think.

    From his vantage point at the dawn of the field, he observed that “we can only see a short distance ahead, but we can see plenty there that needs to be done.”

    Today we can indeed see a little further, and there is a great deal that needs to be done.

    So, ladies and gentlemen, let’s get to work.

  • Michelle Donelan – 2023 Speech at the Guildhall on AI

    Michelle Donelan – 2023 Speech at the Guildhall on AI

    The speech made by Michelle Donelan, the Secretary of State for Science, Innovation and Technology, on 30 October 2023.

    Thank you – it is a pleasure to join you all this evening.

    We have some of the most exciting and innovative thinkers in the world of AI and beyond around the room tonight.

    And of course we are immensely grateful to the City of London for kindly hosting us in this fantastic venue this evening.

But for our City of London friends here tonight who were hoping for a night off from the numbers and the balance sheets, I am afraid you are going to have to wait a bit longer, because the UK’s AI balance sheet tells an extraordinary story that cannot be ignored.

    With 1% of the world’s population, we have built the 3rd largest AI sector in the world.

In less than a decade, we have seen a 688% increase in AI companies basing themselves here.

UK AI scaleups are raising almost double the amount raised by their counterparts in France, Germany and the rest of Europe combined.

    And more money is invested into AI safety here than in any other country in the world.

    By the end of the decade – our AI sector will be worth half a trillion dollars.

    By 2035, it is predicted to be double that. A trillion-dollar AI sector here in the UK.

    For context, that is equal to the value of our entire tech sector today.

    But as the numerous AI startups and scaleups around the room tonight will know, the numbers only tell part of the story.

    The true value of course is the 700,000 hours of time saved for doctors in hospitals and teachers in our schools.

    On our roads, AI models are piloting a new age of electric, self-driving cars which may one day eliminate road death.

    And in some of our classrooms, AI is instantly translating lessons into any language – including Ukrainian for our refugees who have recently settled here.

    But we are only just scratching the surface.  We stand at a pivotal juncture in human history.

    What Alan Turing predicted many decades ago is now coming to fruition.

    Machines are on the cusp of matching humans on equal terms in a range of intellectual domains – from mathematics to visual arts through to fundamental science.

    As Turing foresaw, this progress has not come without opposition.

    Yet the potential for good is limitless if we forge a thoughtful path ahead.

    What could the future really look like?

    The pioneering American computer scientist J.C.R. Licklider envisioned a symbiotic partnership between humans and machines.

    Licklider predicted this could lead to the most “intellectually creative and exciting” period in human history.  But to get there, we must be transparent with the public.

    And we need to show beyond doubt that we are tackling these risks head-on.

    That is why, last week we became the first country in the entire world to communicate to its citizens a clear explanation of what the risks at the frontier of AI could be.

This drew upon genuinely world-leading expertise, including from many of you in this room, and it will lead the conversation not just at home but across the globe.

    Because science fiction is no longer fiction. Science fiction is now science reality.

    Just a few years ago, the most advanced AI systems could barely write coherent sentences.

    Now they are writing poetry, helping doctors detect cancer and generating photorealistic images in a split second.

But alongside these incredible advances come risks.

    And we refuse to bury our heads in the sand.

    We cannot ignore or dismiss the countless experts who tell us plain and simple that there are risks of humans losing control, that some model outputs could become completely unpredictable and that the societal impacts of AI advances could seriously disrupt safety and security here at home.

The Summit will be a moment where we move this discussion forward from the speculative and philosophical to the scientific and empirical.

    AI is not some phenomenon that is happening to us, it is a force we have the power to shape and direct.

    I believe we have a responsibility to act now.

That is why, since I was first appointed Secretary of State, I have sought to grip these issues with every tool at my department’s disposal.

    Through our Frontier AI Taskforce – chaired by leading tech entrepreneur Ian Hogarth – we have built an engine of AI experts to help us tackle these risks head-on.

    We have brought in some of the best and brightest talent in the world.

From civil society, such as the Ada Lovelace Institute and the Centre for Long-Term Resilience, to academics from our leading universities, to researchers from industry leaders.

    Just as the Covid Vaccine Taskforce made us one of the first countries in the world to roll out a working Covid vaccine, this taskforce is making the UK the strongest and most agile country in the world when it comes to AI safety.

    In recent months, our taskforce has recruited renowned experts to guide its work including one of the Godfathers of AI, Yoshua Bengio and GCHQ Director Anne Keast-Butler.

    And it has partnered with leading technical organisations including ARC Evals and the Centre for AI Safety to better understand the risks of frontier AI systems.

    We now want to turbocharge this momentum. To fulfil our pledge to become the intellectual and geographical home of AI.

Which is why the Prime Minister announced just last week that the next step in this journey will be turning our taskforce into a new AI Safety Institute, based right here in the UK.

This Institute will lead a global effort in understanding the risks we’ve publicly talked about and stopping them before they actually pose a risk.

    It will also carry out research into new safety methods so we can get ahead of the curve and ensure developers are using the right tools at the right time to manage risks.

    The work and findings of this institute will shape policy not just domestically but internationally too – helping developers and partner governments innovate safely and collaboratively.

This is not just the right approach – I would argue it is the only approach. AI knows no geographical boundaries. The risks cut across borders, cultures and societies across the globe.

    That is why the Summit must not be seen as the end of a journey, nor as a blunt tool to fix the problem in one swoop.

    As AI evolves over time, our collective response must evolve too.

    We have to distinguish between the high risk work at the frontier of AI, and the vast majority of companies whose development is much lower risk.

    A one-size-fits-all system that ignores these important nuances will be destined to fail, and will stop us reaping the enormous benefit for our society that so many of you here tonight represent.

    Making that 0.1% at the frontier safer will benefit both them and the remaining 99.9% of the sector – allowing us to improve consumer confidence and adoption across society.

    Because we should be unapologetically pro-innovation, pro-business, and pro-safety. We must not pull up the drawbridge to innovation.

    Our approach to AI will be the building blocks for creating a legacy for generations to come.

Indeed, I am delighted to announce that after the curtain falls on our global AI Safety Summit, Bletchley Park will get its first ever permanent AI summit exhibition.

    What happened at Bletchley Park eighty years ago opened the door to the new information age.

And what happens there this week will open the door to a new age of AI – one where no life is needlessly cut short by cruel illnesses like cancer.

    A world where near-limitless clean energy is the norm. Where our children have personalised education that unlocks their hidden talents and where we have more time to do the elements of our jobs we are passionate about rather than tedious paperwork and administration.

    Because as we meet tonight, I truly believe that we are at a crossroads in human history.  To turn the other way would be a monumental missed opportunity for mankind.

    Every time a transformational technology has emerged it has brought with it new risks.

    The motor car created road accidents, but in turn we created seatbelts and established rules of the road.  AI is no different.

    Our Summit this week affords us an unmissable opportunity to forge a path ahead where we can form those rules of the road together as an international community.

    This is a chance to unify behind the goal of giving people in every corner of the globe confidence that AI will work for humanity and not against it.

    Thank you.