Tag: Speeches

  • Rishi Sunak – 2023 Speech on AI Safety Summit

    The speech made by Rishi Sunak, the Prime Minister, at Bletchley Park on 2 November 2023.

    It was here at Bletchley Park where codebreakers including the British genius Alan Turing cracked the Enigma cipher…

…and where we built the world’s first electronic computer.

    Breakthroughs which changed the possibilities for humanity.

    So there could be nowhere more fitting for the world to come together…

    …to seize the opportunities of the greatest breakthrough of our own time….

    …while giving people the peace of mind that we will keep them safe.

    I truly believe there is nothing in our foreseeable future that will be more transformative for our economies, our societies and all our lives…

    ….than the development of technologies like Artificial Intelligence.

    But as with every wave of new technology, it also brings new fears and new dangers.

    So no matter how difficult it may be…

    ….it is the right and responsible long-term decision for leaders to address them.

    That is why I called this Summit….

    …and I want to pay tribute to everyone who has joined us, and the spirit in which they have done so.

    For the first time ever, we have brought together CEOs of world-leading AI companies….

    … with countries most advanced in using it….

    …and representatives from across academia and civil society.

    And while this was only the beginning of the conversation,

    I believe the achievements of this summit will tip the balance in favour of humanity.

    Because they show we have both the political will and the capability to control this technology and secure its benefits for the long-term.

    And we’ve achieved this in four specific ways.

    Until this week, the world did not even have a shared understanding of the risks.

So our first step was to have an open and inclusive conversation to seek that shared understanding.

    We analysed the latest available evidence on everything from social harms like bias and misinformation…

    …to the risks of misuse by bad actors…

    …through to the most extreme risks of even losing control of AI completely.

    And yesterday, we agreed and published the first ever international statement about the nature of all those risks.

    It was signed by every single nation represented at this summit covering all continents across the globe…

    …and including the US and China.

    Some said, we shouldn’t even invite China…

    ….others that we could never get an agreement with them.

    Both were wrong.

    A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers.

    And all of them have signed the Bletchley Park Communique.

    Second, we must ensure that our shared understanding keeps pace with the rapid deployment and development of AI.

That’s why, last week, I proposed a truly global expert panel to publish a State of AI Science report.

    Today, at this summit, the whole international community has agreed.

    This idea is inspired by the way the Intergovernmental Panel on Climate Change was set up to reach international science consensus.

    With the support of the UN Secretary General…

    …every country has committed to nominate experts.

And I’m delighted to announce that Turing Award winner and ‘godfather of AI’ Yoshua Bengio…

    …has agreed to chair the production of the inaugural report.

    Third, until now the only people testing the safety of new AI models…

…have been the very companies developing them.

    That must change.

    So building on the G7 Hiroshima process and the Global Partnership on AI…

    …like-minded governments and AI companies have today reached a landmark agreement.

    We will work together on testing the safety of new AI models before they are released.

    This partnership is based around a series of principles which set out the responsibilities we share.

    And it’s made possible by the decision that I have taken – along with Vice President Kamala Harris…

    ….for the British and American governments to establish world-leading AI Safety Institutes…

    …with the public sector capability to test the most advanced frontier models.

    In that spirit I very much welcome the agreement of the companies here today to deepen the privileged access that the UK has to their models.

    Drawing on the expertise of some of the most respected and knowledgeable AI experts in the world…

…our Safety Institute will work to build our evaluation process in time to assess the next generation of models before they are deployed next year.

    Finally, fulfilling the vision we have set to keep AI safe is not the work of a single summit.

    The UK is proud to have brought the world together and hosted the first summit.

    But it requires an ongoing international process…

    …to stay ahead of the curve on the science…

    …and see through all the collaboration we have begun today.

    So we have agreed that Bletchley Park should be the first of a series of international safety summits…

    …with both Korea and France agreeing to host further summits next year.

The late Professor Stephen Hawking once said that –

    “AI is likely to be the best or worst thing to happen to humanity.”

    If we can sustain the collaboration that we have fostered over these last two days…

    …I profoundly believe that we can make it the best.

    Because safely harnessing this technology could eclipse anything we have ever known.

    And if in time history proves that today we began to seize that prize…

…then we will have written a new chapter worthy of its place in the story of Bletchley Park…

    …and more importantly, bequeathed an extraordinary legacy of hope and opportunity for our children and the generations to come.

  • Michelle Donelan – 2023 Speech at the AI Safety Summit

    The speech made by Michelle Donelan, the Secretary of State for Science, Innovation and Technology, in Milton Keynes on 1 November 2023.

    Good morning, everybody.

    It is my privilege to welcome you all to the first ever global summit on Frontier AI safety.

    During a time of global conflict eight decades ago, these grounds here in Bletchley Park were the backdrop to a gathering of the United Kingdom’s best scientific minds, who mobilized technological advances in service of their country and their values.

    Today we have invited you here to address a sociotechnical challenge that transcends national boundaries, and which compels us to work together in service of shared security and also shared prosperity.

    Our task is as simple as it is profound: to develop artificial intelligence as a force for good.

    The release of ChatGPT, not even a year ago, was a Sputnik moment in humanity’s history.

    We were surprised by this progress — and we now see accelerating investment into and adoption of AI systems at the frontier, making them increasingly powerful and consequential to our lives.

    These systems could free people everywhere from tedious work and amplify our creative abilities.

    They could help our scientists unlock bold new discoveries, opening the door to a world potentially without diseases like cancer and with access to near-limitless clean energy.

    But they could also further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security.

However, there is a significant and robust debate – and I am sure it is going to be just as robust among the attendees over the next two days –

about whether these risks will materialise.

    How they will materialise.

    And, potentially, when they will materialise.

    Regardless, I believe we in this room have a responsibility to ensure that they never do.

    Together, we have the resources and the mandate to uphold humanity’s safety and security, by creating the right guardrails and governance for the safe development and deployment of frontier AI systems.

    But this cannot be left to chance, neglect, or to private actors alone.

    And if we get this right – the coming years could be what the computing pioneer J.C.R. Licklider foresaw as “intellectually the most creative and exciting in the history of humankind.”

    This is what we are here to discuss honestly and candidly together at this Summit.

    Sputnik set off a global era of advances in science and engineering that spawned new technologies, institutions, and visions, and led humanity to the moon.

    We, the architects of this AI era — policymakers, civil society, scientists, and innovators — must be proactive, not reactive, in steering this technology towards the collective good.

    We must always remember that AI is not some natural phenomenon that is happening to us, but it is a product of human creation that we have the power to shape and direct.

    And today we will help define the trajectory of this technology, to ensure public safety and that humanity flourishes in the years to come.

    We will work through four themes of risks in our morning sessions, which will include demonstrations by researchers from the UK’s Frontier AI Taskforce.

    Risks to global safety and security…

    … Risks from unpredictable advances,

    … from loss of control,

    … and from the integration of this technology within our societies.

    Now, some of these risks do already manifest as harms to people today and are exacerbated by advances at the frontier.

    The existence of other risks is more contentious and polarizing.

    But in the words of mathematician I.J. Good, a codebreaker colleague of Turing himself here at Bletchley Park, “It is sometimes worthwhile to take science fiction seriously.”

Today is an opportunity to move the discussion forward from the speculative and philosophical towards the scientific and the empirical.

    Delegations and leaders from countries in attendance have already done so much work in advance of arriving…

    …across a diverse geopolitical and geographical spectrum to agree the world’s first ever international statement on frontier AI – the Bletchley Declaration on AI Safety.

    Published this morning, the Declaration is a landmark achievement and lays the foundations for today’s discussions.

    It commits us to deepening our understanding of the emerging risks of frontier AI.

    It affirms the need to address these risks – as the only way to safely unlock extraordinary opportunities.

    And it emphasises the critical importance of nation states, developers and civil society, in working together on our shared mission to deliver AI safety.

    But we must not remain comfortable with this Overton window.

    We each have a role to play in pushing the boundaries of what is actually possible.

And that is what this afternoon will be all about: to discuss what actions different communities will need to take next, and to bring out diverse views, to open up fresh ideas and challenge them.

    For developers to discuss emerging risk management processes for AI safety, such as responsible, risk-informed capability scaling.

    For national and international policymakers to discuss pathways to regulation that preserve innovation and protect global stability.

    For scientists and researchers to discuss the sociotechnical nature of [safety], and approaches to better evaluate the risks.

    These discussions will set the tone of the Chair’s summary which will be published tomorrow. They will guide our collective actions in the coming year.

And this will lead up to the next summit, which I am delighted to share with you today will be hosted by the Republic of Korea in six months’ time, and then by France in one year’s time.

    These outputs and this forward process must be held to a high standard, commensurate with the scale of the challenge at hand.

    We have successfully addressed societal-scale risks in the past.

    In fact, within just two years of the discovery of the hole in the Antarctic ozone layer, governments were able to work together to ratify the Montreal Protocol, and then change the behaviour of private actors to effectively tackle an existential problem.

    We all now look back upon that with admiration and respect.

    But for the challenges posed by frontier AI, how will future generations judge our actions here today?

    Will we have done enough to protect them?

    Will we have done enough to develop our understanding to mitigate the risks?

    Will we have done enough to ensure their access to the huge upsides of this technology?

    This is no time to bury our heads in the sand. And I believe that we don’t just have a responsibility, we also have a duty to act – and act now.

    So, your presence here today shows that these are challenges we are all ready to meet head on.

The fruits of this summit must be clear-eyed understanding, routes to collaboration, and bold actions to realise AI’s benefits whilst mitigating the risks.

    So, I’ll end my remarks by taking us back to the beginning.

    73 years ago, Alan Turing dared to ask if computers could one day think.

    From his vantage point at the dawn of the field, he observed that “we can only see a short distance ahead, but we can see plenty there that needs to be done.”

    Today we can indeed see a little further, and there is a great deal that needs to be done.

    So, ladies and gentlemen, let’s get to work.

  • Michelle Donelan – 2023 Speech at the Guildhall on AI

    The speech made by Michelle Donelan, the Secretary of State for Science, Innovation and Technology, on 30 October 2023.

    Thank you – it is a pleasure to join you all this evening.

    We have some of the most exciting and innovative thinkers in the world of AI and beyond around the room tonight.

    And of course we are immensely grateful to the City of London for kindly hosting us in this fantastic venue this evening.

But for our City of London friends here tonight who were hoping for a night off from the numbers and the balance sheets, I am afraid you are going to have to wait a bit longer, because the UK’s AI balance sheet tells an extraordinary story that cannot be ignored.

    With 1% of the world’s population, we have built the 3rd largest AI sector in the world.

We have rocketed to a 688% increase in the number of AI companies basing themselves here in less than a decade.

    UK AI scaleups are raising almost double that of France, Germany and the rest of Europe combined.

    And more money is invested into AI safety here than in any other country in the world.

    By the end of the decade – our AI sector will be worth half a trillion dollars.

    By 2035, it is predicted to be double that. A trillion-dollar AI sector here in the UK.

    For context, that is equal to the value of our entire tech sector today.

    But as the numerous AI startups and scaleups around the room tonight will know, the numbers only tell part of the story.

    The true value of course is the 700,000 hours of time saved for doctors in hospitals and teachers in our schools.

    On our roads, AI models are piloting a new age of electric, self-driving cars which may one day eliminate road death.

    And in some of our classrooms, AI is instantly translating lessons into any language – including Ukrainian for our refugees who have recently settled here.

    But we are only just scratching the surface.  We stand at a pivotal juncture in human history.

    What Alan Turing predicted many decades ago is now coming to fruition.

    Machines are on the cusp of matching humans on equal terms in a range of intellectual domains – from mathematics to visual arts through to fundamental science.

    As Turing foresaw, this progress has not come without opposition.

    Yet the potential for good is limitless if we forge a thoughtful path ahead.

    What could the future really look like?

    The pioneering American computer scientist J.C.R. Licklider envisioned a symbiotic partnership between humans and machines.

    Licklider predicted this could lead to the most “intellectually creative and exciting” period in human history.  But to get there, we must be transparent with the public.

    And we need to show beyond doubt that we are tackling these risks head-on.

    That is why, last week we became the first country in the entire world to communicate to its citizens a clear explanation of what the risks at the frontier of AI could be.

This drew upon genuine world-leading expertise, including from many of you in this room, and it will lead the conversation not just at home but across the globe.

    Because science fiction is no longer fiction. Science fiction is now science reality.

    Just a few years ago, the most advanced AI systems could barely write coherent sentences.

    Now they are writing poetry, helping doctors detect cancer and generating photorealistic images in a split second.

But alongside these incredible advances come risks.

    And we refuse to bury our heads in the sand.

    We cannot ignore or dismiss the countless experts who tell us plain and simple that there are risks of humans losing control, that some model outputs could become completely unpredictable and that the societal impacts of AI advances could seriously disrupt safety and security here at home.

    The Summit will be a moment where we move this discussion forward from the speculative and philosophical. To the scientific and empirical.

    AI is not some phenomenon that is happening to us, it is a force we have the power to shape and direct.

    I believe we have a responsibility to act now.

    That is why, since I was first appointed Secretary of State I have sought to grip these issues with every tool at my department’s disposal.

    Through our Frontier AI Taskforce – chaired by leading tech entrepreneur Ian Hogarth – we have built an engine of AI experts to help us tackle these risks head-on.

    We have brought in some of the best and brightest talent in the world.

    From civil society such as the Lovelace Institute and the Centre for Long-Term Resilience, to academics from our leading universities, to researchers from industry leaders.

    Just as the Covid Vaccine Taskforce made us one of the first countries in the world to roll out a working Covid vaccine, this taskforce is making the UK the strongest and most agile country in the world when it comes to AI safety.

In recent months, our taskforce has recruited renowned experts to guide its work, including one of the godfathers of AI, Yoshua Bengio, and GCHQ Director Anne Keast-Butler.

    And it has partnered with leading technical organisations including ARC Evals and the Centre for AI Safety to better understand the risks of frontier AI systems.

    We now want to turbocharge this momentum. To fulfil our pledge to become the intellectual and geographical home of AI.

    Which is why the Prime Minister announced just last week, that the next step in this journey will be turning our taskforce into a new AI Safety Institute based right here in the UK.

This Institute will lead a global effort in understanding the risks we’ve publicly talked about and stopping them before they actually pose a risk.

    It will also carry out research into new safety methods so we can get ahead of the curve and ensure developers are using the right tools at the right time to manage risks.

    The work and findings of this institute will shape policy not just domestically but internationally too – helping developers and partner governments innovate safely and collaboratively.

This is not just the right approach – I would argue it is the only approach. AI knows no geographical boundaries. The risks cut across borders, cultures and societies across the globe.

    That is why the Summit must not be seen as the end of a journey, nor as a blunt tool to fix the problem in one swoop.

    As AI evolves over time, our collective response must evolve too.

    We have to distinguish between the high risk work at the frontier of AI, and the vast majority of companies whose development is much lower risk.

    A one-size-fits-all system that ignores these important nuances will be destined to fail, and will stop us reaping the enormous benefit for our society that so many of you here tonight represent.

    Making that 0.1% at the frontier safer will benefit both them and the remaining 99.9% of the sector – allowing us to improve consumer confidence and adoption across society.

    Because we should be unapologetically pro-innovation, pro-business, and pro-safety. We must not pull up the drawbridge to innovation.

    Our approach to AI will be the building blocks for creating a legacy for generations to come.

    Indeed, I am delighted to announce that after the curtain falls on our global AI Safety Summit, Bletchley Park will get its first-ever, permanent AI summit exhibition.

    What happened at Bletchley Park eighty years ago opened the door to the new information age.

    And what happens there this week will open the door to a new age of AI. Where no life is needlessly cut short by cruel illnesses like cancer.

    A world where near-limitless clean energy is the norm. Where our children have personalised education that unlocks their hidden talents and where we have more time to do the elements of our jobs we are passionate about rather than tedious paperwork and administration.

    Because as we meet tonight, I truly believe that we are at a crossroads in human history.  To turn the other way would be a monumental missed opportunity for mankind.

    Every time a transformational technology has emerged it has brought with it new risks.

    The motor car created road accidents, but in turn we created seatbelts and established rules of the road.  AI is no different.

    Our Summit this week affords us an unmissable opportunity to forge a path ahead where we can form those rules of the road together as an international community.

    This is a chance to unify behind the goal of giving people in every corner of the globe confidence that AI will work for humanity and not against it.

    Thank you.

  • Tom Tugendhat – 2023 Speech on Fraud and AI

    The speech made by Tom Tugendhat, the Security Minister, on 31 October 2023.

    It’s an enormous pleasure to be with you and I’m very grateful to be back at RUSI.

    I gave my first foreign policy speech when I took over the Chairmanship of the Foreign Affairs Committee here.

    I know RUSI’s vision has always been to inform, influence and enhance public debate to help build a safer and more stable world.

That mission has endured for 200 or so years now. The mission has not changed, but the medium has.

    Today the range of challenges we face has never been greater.

So it’s right that here, at the home of strategic thinking, we’re gathering to build on the foundations of those who shaped our security in the generations before us, to make sure that it endures for the generations to come.

    So a profound thanks to our hosts, and also to you all, for being here on the eve of the first major global summit on AI security.

    As with the summit itself, we have representatives here from government, from industry, from civil society, academia, and law enforcement.

    Whatever your profession, whatever sector you represent, you are here because we need you.

    Because we need each other.

    Like so many areas of my responsibility, the government cannot do this alone.

    Our role in government is to understand the threats that we face and target resources, helping others to come together and meet our challenges in the most effective way possible.

    You can tell a lot about a government from the operating system they build for society.

Some countries build systems that are designed to control.

Others build systems designed to exploit.

    Here in the UK we build systems that are designed to liberate.

    To free individual aspiration and creativity for the benefit of all.

    And that’s what security means to me.

    It’s not a means of closing things down.

    It’s about creating the conditions required to open up a society.

    A safe environment in which ideas can take root, and opportunity is available to all.

    That’s why we need to get this right.

    Because technology as transformative as AI will touch every part of our society.

    If we succeed, hardworking families up and down the country will reap the benefits.

    If we don’t we will all pay the price.

The stakes are very high, but coming together in this way today sends the right message.

    There are two core themes for the programme today. They come from different eras.

    The first is fraud, which in its various guises, is as old as crime itself.

    When Jacob stole Esau’s inheritance by passing himself off as his brother, that was perhaps the first description of fraud in the Bible.

The first record of fraud is actually possibly older: it dates from a case related to copper ingots, recorded some 4,000 years ago in Babylon.

The last time I spoke about Babylon at RUSI, I was in uniform, describing how I had served in one of the many armies to have camped under its walls.

    The challenges posed by Artificial Intelligence are comparatively new.

    Its democratisation will bring about astonishing opportunities for us all.

    Sadly that includes criminals.

    We know that bad actors are quick to adopt new technologies.

    Unchecked, AI has the power to bring about a new age of crime.

    Already we’re seeing large language models being marketed for nefarious purposes.

One chatbot being sold on the dark web – FraudGPT – claims to be able to draft realistic phishing emails:

    mimicking the format used by your bank, and even suggesting the best place to insert malicious links.

    That doesn’t just have implications for the realism of scams.

    It has huge implications for their scale as well.

    I don’t want to be in a situation where individuals can leverage similar technologies to pull off sophisticated scams at the scale of organised criminal gangs.

We don’t want to find that the Artful Dodger has coded himself up into Al Capone.

    At a fundamental level, fraudsters try to erase the boundary between what’s real and what’s fake.

    Until relatively recently, that was a theoretical risk.

    It wasn’t so long ago that I believed I was immune to being fooled online.

    That is, until I saw a viral picture of the Pope in a coat.

    Not just any coat.

    A fashionable puffer jacket that wouldn’t look out of place on the runway in Paris.

    One that my wife assured me was ‘on trend’.

    I quickly forgot about it.

    That is, until I learned that that image wasn’t actually of the Pope at all.

    It was created on Midjourney. Using AI.

    On the one hand it was a harmless gag, Pope Francis had never looked better.

    On the other hand, it left me deeply uneasy.

    If someone so instantly recognisable as the Holy Father could be wholly faked, what about the rest of us?

    The recent Slovakian elections showed us how this could work in practice.

    Deepfake audio was released in the run up to polling day.

    It purported to show a prominent politician discussing how to rig the vote.

    The clip was heard by hundreds of thousands of individuals.

    Who knows how many votes it changed – or how many were convinced not to vote at all.

    This is of course an example of a very specific type of fraud.

    But all fraudsters blur the boundary between fact and fiction.

    They warp the nature of reality.

    It does not take a massive leap of imagination to see the implications of that in the fraud space.

    Thankfully, relatively few AI-powered scams have come to light so far.

However, those that have come to light highlight the potential for AI to be used by criminals to defraud people of their hard-earned cash.

    The risks to citizens, businesses and our collective security are clear.

    A few lines of code can act like Miracle Gro on crime, and the global cost of fraud is already estimated to be in the trillions.

    In the United Kingdom, fraud accounts for around 40% of all estimated crime.

    There’s an overlap with organised crime, terrorism and hostile activity from foreign states.

    It is in a very real sense a threat to our national security.

    But while there is undoubtedly a need to be proactive and vigilant, we need not despair.

    And the wealth of talent, insight and expertise I see in front of me here gives me hope.

For the Government’s part, we are stepping up our counter-fraud efforts through the comprehensive strategy we published this summer and the work of my friend Anthony Browne, the Anti-Fraud Champion.

    Fraud is a growing, transnational threat, and has become a key component of organised criminality and harm in our communities. So international co-operation is essential.

    That’s why the UK will host a summit in London next March to agree a co-ordinated action plan to reform the global system and respond to this growing threat.

    We expect Ministers, law enforcement and intelligence agencies to attend from around the world.

The Online Safety Act, which has completed its passage through Parliament, will require social media and search engine companies to take robust, proactive action to ensure users are not exposed to user-generated fraud or fraudulent advertising on their platforms.

    And we are working on an Online Fraud Charter with industry that includes innovative ways for the public and private sector to work together to protect the public, reduce fraud and support victims.

    This will build on the charters that are already agreed with the accountancy, banking, and telecommunications sectors to combat fraud, which have already contributed to a significant reduction in scam texts and a 13% fall in reported fraud in the last year.

    New technologies don’t just bring about risk.

    They create huge opportunities too.

    AI is no different.

    We know that the possibilities are vast, endless even.

What’s more, it’s essential.

    As the world grows more complex, only advanced intelligence systems can meet the task before us.

    We need the AI revolution to deliver services and supply chains in an ever more globalised world.

    I’m particularly interested in the question of how we can harness this new power in the public safety arena.

    As we will hear shortly, AI is already driving complex approaches to manage risk, protect from harm and fight criminality.

    There is a real-world benefit in combating fraud and scams, such as payment processing software that is stopping millions of scam texts from reaching potential victims.

    No doubt I’ve barely scratched the surface, and there’s lots more excellent work going on.

    What we absolutely have to do is break down any barriers that might exist between the different groups represented here this evening.

    The only people who benefit from a misaligned, inconsistent approach are criminals, so it’s critical that we work hand in glove, across sectors and borders.

    I want to come back to the point I started on.

For me, AI and the security it enables are an essential part of the State’s responsibility to keep us all safe.

    It’s not to increase our control.

    Not to keep people in a box.

    But to set people free.

    We cannot eliminate risk, but we can understand it.

    Using AI to map and measure today’s environment will ensure we do that.

    The pursuit of progress is essential to human experience.

    And the reality is that even if we wanted to, we cannot put the genie back in the bottle.

That does not mean, though, that we simply sit back and wait to see what happens.

    We can’t be passive in the face of this threat.

    So what I want us to be thinking about is how we move forward.

Well, the way I see it, there are three key questions that align to the aims of the AI Safety Summit:

    • The first, how do we build safe AI models that are resilient to criminal intent?
    • Second, as the vast majority of fraud starts online, how do we harness AI to ensure that harmful content is quickly identified and removed?
    • And lastly, what do governments need to be doing globally to balance progress and growth with safety and security?

    That’s far from an exhaustive list.

    But I think by addressing these core questions we can put ourselves on the right path.

So, thank you once again for being here, and thank you, RUSI, for hosting us. I hope you will find it a valuable exercise.

    And most of all I hope we can look back and say that today was a day when we took important steps forward in our shared mission to reduce the risks and seize the opportunities associated with AI. I remain hugely optimistic, but that optimism depends on the work we do today together.

  • Steve Barclay – 2023 Speech at the IHPN Annual Summit and Dinner

    The speech made by Steve Barclay, the Secretary of State for Health and Social Care, on 31 October 2023.

Our focus at the Department of Health and Social Care is to diagnose and treat conditions quicker.

    Because this makes patient outcomes better, but it’s also much cheaper to deliver.

    That’s the sweet spot that we’re focused on hitting.

    One where patients and taxpayers are both better off.

    This approach underpins everything we as a department are doing.

    From our pharmacy first rollout through to the lung cancer screening programme.

    And it’s why, David [David Hare, Chief Executive of the Independent Healthcare Providers Network (IHPN)], I strongly support working with the independent sector.

    You collectively have a key role to play in diagnosing and treating conditions, and delivering the improved patient outcomes we all want to see.

    We are speeding up the diagnosis of major diseases.

    As a result of more referrals and screening, the percentage of cancer patients presenting in emergencies fell by more than 15% between 2010 and 2022.

    And we must keep driving these rates down.

A University College London study found patients diagnosed in emergencies are half as likely to survive 12 months as those diagnosed through non-emergency routes, like GPs.

And this isn’t just about cancer: around 8 in 10 heart failures are diagnosed in emergency departments.

    So, we need to get these numbers down to help save lives.

    And to do so, we need to turbocharge testing and diagnostics.

    That’s why, as an example, we’re rolling out more blood pressure checks than ever before.

    But this is far from the only challenge that our health system faces.

    The pandemic left behind – as colleagues in the room are well aware – very large backlogs, not least in elective care.

    But while we often focus on the challenges of COVID, it also showed us opportunities, the way to do things differently.

    The NHS worked effectively with the independent sector to maximise capacity and to tackle a common challenge.

    Many people in this room contributed to this effort.

    And the government – and people across the country – are grateful for how you cleared your schedules to provide NHS care to patients most in need.

    Now, this partnership must be sustained if we are to tackle those COVID backlogs.

    That’s why we launched, last year, the Elective Recovery Taskforce.

    It united the public and independent sectors with a clear goal: using every bit of available capacity to cut waiting lists.

    And David, I want to recognise the contribution the IHPN made to the taskforce.

    And I was delighted your leadership throughout the pandemic was recognised with an MBE.

    As the taskforce rightly concluded, one way of better using available capacity is delivering meaningful patient choice.

    We know three-quarters of patients are willing to travel to get care quicker.

    And we know improving choice can reduce waiting times by up to 3 months.

    That’s why we’ve committed to giving patients a choice of 5 providers at GP referrals, including those from the independent sector.

    Allowing patients to choose where they’re treated based on what matters most to them.

That may be a shorter waiting time.

    It may be seeing a particular doctor.

    Or it may be receiving care closer to home.

    [Political content removed]

    So from today, patients in England who have been waiting more than 40 weeks for treatment will have the right to request to be seen elsewhere.

    And that’s an opportunity opened up to around 400,000 patients who will be eligible.

    Hospitals will contact them to see how far they’re willing to travel.

    And if they request to move, integrated care boards must make every effort to find hospitals with shorter waiting lists.

    Not just within the NHS, but also across the independent sector.

    If they find shorter waiting lists, integrated care boards must give patients the choice to transfer for faster care.

    And these reforms have the Prime Minister’s personal backing.

    They will help more patients exercise their right to choose, and through that, help cut waiting lists.

    But our elective taskforce wasn’t just about driving reform from the centre, it also focused on empowering integrated care boards themselves and independent providers to cut waiting lists at a local level.

    The taskforce assessed different ways of doing so at 2 ICBs.

    Both of which delivered results.

    Leicester, Leicestershire and Rutland saw an increase in independent sector activity of more than 70%.

    And by July, just 15 patients across Birmingham and Solihull were waiting more than 18 months for treatments.

    This shows how effectively ICBs and the independent sector can work together to cut waiting lists at a local level.

    And I know how critical independent oversight of choice, as it is now rolled out, is to giving confidence to investors.

    That’s why, working with David and colleagues, the taskforce recommended an independently chaired panel to promote genuine choice and fair procurement.

    And today, I can confirm the panel will be up and running by January.

    Now, the Elective Recovery Taskforce has achieved a lot.

    But perhaps its greatest success has been turbocharging the rollout of community diagnostic centres, or CDCs – the one-stop shops where patients receive tests for conditions like cancer and heart disease.

    In his 2020 review, Mike Richards set out his vision for CDCs – a radical investment and reform of diagnostic services, putting care at the heart of communities.

    Governments of all stripes have been criticised for prioritising investment in acutes over community services.

    But we’ve made community diagnostic centres a reality.

127 CDCs are already open.

    Many are on high streets, in car parks, or even outside football stadiums.

    Giving patients care closer to home.

    Increasing NHS capacity.

    Reducing pressure on hospitals.

    And getting patients lifesaving diagnostics faster.

    Community diagnostic centres are the biggest investment in MRI and CT scanning capacity in the NHS’s history.

    And over the course of the programme, we will have increased our stock of scanners by almost a third.

    The independent sector has, of course, been key to this success.

13 CDCs will be run by independent providers – 8 are already operational – and 22 CDCs on the NHS estate use the independent sector’s diagnostic capabilities.

The independent sector’s investment in CDCs has saved an estimated £110 million from the NHS capital budget.

    Money we put straight back into a further 7 community diagnostic centres.

    Giving patients better care, and delivering better value for money for the taxpayer.

    And today, I’m delighted to announce we’ll have opened 160 community diagnostic centres by March – hitting our target a year early.

    We’ve moved the opening dates for 40 CDCs, bringing them forward into this year.

    Our decision to do that was criticised at the time.

    But getting these CDCs open is why we’ve beaten our target.

    It’s why more patients will receive potentially lifesaving checks sooner.

    And I will never forget the reason that matters.

When I was with the Prime Minister in, of all places, an Asda car park in Nottingham – talking to a guy, Terrence, at a lung cancer screening truck there.

    He was a heavy smoker.

    And he said to me he would never have gone to hospital to be checked.

    He would have been too worried to do so.

    But because it was in the car park of the Asda store, he had a lung cancer screening check.

The check came back positive, and he was diagnosed at a much earlier stage.

    He said, “I’d have never gotten checked going to hospital, but the scanner has been so easy to get it done.”

    And we want more people across the country to do what Terrence did, and to get the tests they need as quickly as possible.

    That’s why today, I’m pleased to announce 3 of the final locations for our community diagnostic centres.

    Sites at Queen Mary’s Hospital in Sidcup, in Halifax, and in Bognor Regis, all of which will open this December – each one providing tens of thousands of vital checks every year.

    Last month, NHS England also confirmed the approval of 4 more community diagnostic centres – 2 in Wiltshire run by the independent sector, one in Thanet, and one in Cheshire.

    And we’re committed to transparency as well as delivery.

    I think the public has a right to know when their local CDCs will open.

    And more importantly, once they’ve opened, how they’re performing.

    That’s why we’re introducing an online dashboard to make this information easily accessible, alongside details of new hospital builds and upgrades.

    And it’s why, before the end of the year, we’re committed to publishing data on the number of MRI and CT scanners that are operational across the independent sector and the NHS.

    Community diagnostic centres have shown us how the public and independent sectors can deliver together.

    And across the health system, there are many more challenges we can overcome.

    Take training.

    To deliver the huge training expansion the NHS Long Term Workforce Plan commits to – doubling medical training places, almost doubling adult nursing places – we need the independent sector to give its strongest support.

    You carry out collectively a huge volume of procedures, and this in turn creates many training opportunities.

    And your role in training will continue to be an important part of our wider partnership in the years ahead.

It’s not only the Long Term Workforce Plan that’s key to building that sustainable NHS; it is also important that we maximise efficiency and invest in the latest technology.

    AI – on which the Prime Minister is chairing a global summit today – is a key part of that.

    By the end of this year, every stroke network in England will have AI technology that can examine brain scans an hour faster.

    And this matters.

    Saving an hour can cut a stroke patient’s risk of suffering long-term consequences by as much as two-thirds.

    So again, this isn’t theory around AI.

This is something that will be deployed in every one of our stroke centres by December, saving an hour on diagnosis and delivering real benefits for patients’ outcomes.

    And as we announced yesterday, almost half of NHS acute trusts have won a share of £21 million that we’re investing in AI.

    This will accelerate the analysis of X-rays and CT scans for suspected lung cancer patients.

    And studies into these technologies have shown the very real promise they offer.

    Let me give you an example.

    One carried out at Calderdale and Huddersfield NHS Trust suggests AI can process scans in less than 8 seconds, reducing radiologists’ workloads by 28% and reducing waiting times for suspected lung cancer patients by more than 70%.

    And as the Prime Minister announced last week, we’re investing a further £100 million to use AI to unlock treatments for diseases that are incurable today.

Be that novel treatments for dementia, or vaccines for cancer.

    So, AI isn’t a silver bullet, but I’m determined to explore how it can get patients the care they need faster.

    So, in conclusion, our priorities are very clear.

    To diagnose patients faster.

    To give them more choice and control.

    To embrace technology and innovation.

    To deliver training for the long term.

    And by working in partnership, David, with your colleagues in the independent sector, we can deliver all of these things.

    So, together, let’s build an NHS that puts the patient first.

    Thank you very much.

  • Lucy Frazer – 2023 Speech on Journalism Matters Week

    The speech made by Lucy Frazer, the Secretary of State for Culture, Media and Sport, in London on 30 October 2023.

    Journalism matters.

That’s what this week is about.

    But it isn’t something that we should just acknowledge this week.

    It is something we should acknowledge every month. Every week. Every day.

    Your work in holding people, organisations and countries to account.

    Your reporting without fear or favour.

    Calling out wrongful activity and evil.

Combatting mis- and disinformation.

    These are the signs of a true democracy.

    Freedom of the press is not actually about media freedom.

    It is about our freedom.

    Through your reporting you are protecting the freedom of others.

    And in a world of social media, mainstream media plays a critical role.

Your fact-checked, legally proofed, thorough work allows truth to prevail in an increasingly uncertain world.

    And what you do takes courage.

    Immense courage.

    I know all of you in the room right now will be worried about your colleagues.

    Those working in war zones across the globe.

    Me too.

    Your journalists are putting their lives on the line for truth, freedom and democracy.

    The events in Israel and Gaza have recently made me think of other journalists.

    The American Journalist James Foley who was abducted in Syria and beheaded by ISIS.

Evan Gershkovich, the Wall Street Journal journalist detained in Russia.

The 14 civilian journalists and media workers who have been killed in the line of duty since Russia invaded Ukraine.

    And of course, we are all thinking of the journalists covering the conflict in Israel and Gaza right now.

    Nine journalists have sadly already lost their lives.

    And my thoughts are with all of them and their families,

    And all of you, who no doubt keenly feel the loss of your friends.

    And it’s not just you and your colleagues’ courage in the face of war.

    But the courage in dealing with issues which evoke hate and abuse on social media.

    Or the full force of legal threats and pressure from the rich and powerful who try to keep their secrets secret.

    Courage to stand up to unscrupulous regimes and people in positions of power.

    I want you to know that we, in government, have your back.

    We understand what you are doing and why it is so important.

    It’s why we launched the National Action Plan for the Safety of Journalists.

    …Why we are legislating to make it harder for powerful people to stop the publication of investigative journalism through unscrupulous lawsuits.

    And why we expanded the National Committee for the Safety of Journalists to deal with these legal threats.

    It’s why we are supporting you through the Digital Markets Bill which should enable you to get fair terms and fair compensation when your work is hosted on digital platforms.

    And it is why we will remove a threat to freedom of the press by repealing S40 so costs are not a bar to investigative journalism.

    You do not need telling that your courage has shaped our history.

    Bringing down the Nixon administration in Watergate.

Exposing Harvey Weinstein as a sexual predator, beginning the Me Too movement.

And, from as early as the Crimean War, sending your stories home, exposing the horrors of war and the bravery of soldiers, and contributing to the creation of the Victoria Cross.

    And right now, again, our media is shaping our future.

    I have watched and read those brave, passionate voices who are standing against Hamas and calling out antisemitism.

Highlighting the 1,353 per cent rise in antisemitism in London alone.

    Spotlighting the 4 Jewish schools that shut.

Decrying the fact that children going to Jewish schools have been advised not to wear their blazers, to prevent them from becoming a terrorist target.

    Pointing out the disturbing scenes at the airport in Dagestan.

    Those journalists.

    Calling out the denial by some, of the atrocities, carried out by a terrorist group.

    Calling out the tearing down of posters of abducted children.

And your fact-checked, legally proofed, thorough work…

    Your truth telling.

    Will make a difference to our future.

    We only have to look back less than 100 years.

    To what happened when conspiracy theories were rife, antisemitism the norm and no one in society called it out…

Because of this failure of society as a whole, the German people were ready to accept the ‘annihilation of the Jewish race in Europe’, as Hitler infamously put it.

    In 1941 Joseph Goebbels wrote that the fate the Jewish people were meeting was deserved and that no one should pity or regret it.

    And most people didn’t bat an eyelid.

    And your calling out matters not just to the Jews. But to all of us in this room. And beyond.

    Because it is well known that when there is prejudice against any minority it should be a concern for all minorities.

    Unfortunately the Jewish community knows what can come next.

    Hitler didn’t just aim to wipe out the Jews, he targeted anyone who was gay, black or a traveller.

    Your calling out matters because terrorist organisations like Hamas share the principles of terrorist organisations like al Qaeda and ISIS.

    And they are no friends of the west.

    Their hatred is not confined to Jews, or Israel. They hate all of us who share western values.

    And Hamas’ actions in no way support or help the plight of the Palestinian people.

    Who are also entitled to safety and to live in peace.

We are also seeing a worrying rise in anti-Muslim hatred.

    Terrorists use the media as a weapon of war.

    Truth must prevail.

    And for that, journalism matters.

    Not just this week.

    Today, Tomorrow. Next year. And every year after that.

    I want to end with a message someone in Israel sent me last Monday.

It is a copy of a statement published by the Shin Bet, the Israeli security service, and the Israeli police, summarising the result of the investigation into the massacre of over 1,400 people on 7 October.

    And, I quote…

The investigation revealed, following the interrogation of six terrorists who were captured, that Hamas offered significant financial incentives to anyone who successfully kidnapped an Israeli, with abductors promised $10,000 and a free apartment.

    The detainees stated that the instructions were to kidnap elderly women and children.

According to one detainee, his commanders ordered the terrorists to behead Jews and rape women and girls.

    Courage. Freedom. Truth.

    Thank you for your role in our democracy.

  • Anne-Marie Trevelyan – 2023 Speech at the South China Sea Conference

    The speech made by Anne-Marie Trevelyan, the UK Minister for the Indo-Pacific, in Vietnam on 25 October 2023.

    Excellencies, ladies and gentlemen, friends. It’s good to be here with you in person this morning to show my support for an area of geopolitical importance.

    I am especially glad to be here this year during the 50th anniversary year of diplomatic relations between the United Kingdom and Vietnam. We are a close partner with Vietnam on maritime security and remain committed to strengthening our collaboration.

    I’m here because what happens in the South China Sea matters globally. As you’re aware, almost 60% of global maritime trade passes through the South China Sea. This makes it vital that all parties enjoy the same freedoms to navigate and exercise in the South China Sea.

    Russia’s illegal invasion of Ukraine offers an alarming example of the pain inflicted when supply chains are disrupted by conflict. Rising energy and food prices are harming the world’s most vulnerable people.

    Like you, the UK is committed to avoiding any such outcome in this region. We seek to preserve a free and open Indo-Pacific. We want to deepen relationships with our partners, support sustainable development and tackle the shared challenges we all face.

    What does this mean for the South China Sea? It means supporting stability and working together on climate change.

    It also means establishing and maintaining open lines of communication. That is the most effective way of managing tensions. Failing to do so risks escalation. You in this region know, more than anyone, the potentially catastrophic consequences that this could have. As the UK deepens its long-term partnership with ASEAN and others in the Indo-Pacific, we are committed to helping you to de-escalate tensions and maintain stability.

    That is why our commitment to the UN Convention on the Law of the Sea is unwavering. Last year, on its 40th anniversary, I reiterated the important role UNCLOS plays in setting the legal framework for activities in our seas and oceans.

    The UK takes no sides in the sovereignty disputes in the South China Sea, but we oppose any activity that undermines or threatens UNCLOS’ authority – including attempts to legitimise incompatible maritime claims.

The recent instances of unsafe conduct against Vietnamese and Filipino vessels have demonstrated the serious risks posed to regional peace and stability. When we see actions that violate UNCLOS we will call them out – as we did following events around the Second Thomas Shoal this week. And we will support our partners to shine a light on this so-called ‘grey zone’ activity that creates tensions and risks escalation.

    Our ambassador in Manila joined partners this July in reiterating that the 2016 Arbitral Award is a significant milestone in resolving disputes peacefully and is legally binding on China and the Philippines. We call on both parties to abide by the findings of those proceedings.

    Our partnership with ASEAN supports our shared commitment for a free and open Indo-Pacific. We respect and admire the central role ASEAN has played in maintaining regional stability and prosperity.

    We look forward to working with you on advancing the ASEAN Outlook on the Indo-Pacific, with maritime cooperation being a key pillar. We also congratulate ASEAN on issuing its first Maritime Outlook and holding its first maritime Solidarity Exercise.

    I am also grateful to my Indonesian counterparts for their work as ASEAN chair this year in progressing negotiations for a Code of Conduct in the South China Sea.

    The UK strongly believes in the need for an agreement that is consistent with UNCLOS and reflects the interests and guarantees the rights and freedoms of all parties – including third countries. The UK’s Carrier Strike Group will soon return to the region to demonstrate these rights and freedoms in practice.

    Let me turn now to what the UK can offer.

    Like ASEAN, we hope that a sea of conflict can become a sea of cooperation. There is no more urgent need for cooperation than on environmental degradation. Pressures on fisheries, the destruction of the marine environment and rising sea levels pose an existential threat to the millions of people who rely on the South China Sea for their livelihoods.

    This is why we have launched new projects – including through our ASEAN dialogue partnership – to conserve the sea for our future generations.

    Our Just Energy Transition Partnerships, signed with Vietnam and Indonesia, encourage the early retirement of high-emitting coal-fired power plants, promote investment in renewable energy, and overcome barriers to an inclusive and just transition.

    Our Blue Planet Fund, worth half a billion pounds, includes over £150 million for the COAST programme. This is designed to help vulnerable coastal communities across the region improve their climate resilience and become more sustainable.

    Other Blue Planet Fund programmes focus on tackling plastic pollution – a key ASEAN objective; testing innovative mechanisms to mobilise blue finance; protecting coral reefs; and commissioning studies into the impact of climate security risks.

    Furthermore, to sustain the South China Sea’s vital role as a provider of fish and livelihoods, this year the UK announced £2.5 million of funding to tackle Illegal, Unreported and Unregulated fishing – another key priority of the ASEAN Maritime Outlook. This support will help the coastal communities, fragile ocean ecosystems, and global food supply chains that would otherwise face devastation. We have already started work with partners in the Philippines and we want to expand the scope of similar practical projects with countries in this region, including here in Vietnam.

    The UK also continues to support our regional partners’ resilience and security through our ASEAN-UK Maritime Cooperation Programme. We are helping to build capacity on maritime law and providing training and sharing expertise in Exclusive Economic Zone management, maritime domain awareness, and hydrographic research.

    Through our bids to join the ASEAN Regional Forum, and the ASEAN Defence Ministers’ Meeting Plus, we propose to make even stronger commitments to regional security and stability.

    In conclusion, the UK’s commitment in this region is steadfast. The peace and prosperity of the South China Sea must remain a priority for all. I wish you all a productive and successful conference, and I look forward to the rest of my time here in Vietnam to learn even more at first hand.

    Thank you.

  • Rishi Sunak – 2023 Speech on AI

    Rishi Sunak – 2023 Speech on AI

    The speech made by Rishi Sunak, the Prime Minister, at the Royal Society in London on 26 October 2023.

    I’m delighted to be here at the Royal Society, the place where the story of modern science has been written for centuries.

    Now, I’m unashamedly optimistic about the power of technology to make life better for everyone.

    So, the easy speech for me to give – the one in my heart I really want to give…

    …would be to tell you about the incredible opportunities before us.

    Just this morning, I was at Moorfields Eye Hospital.

    They’re using Artificial Intelligence to build a model that can look at a single picture of your eyes…

    …and not only diagnose blindness, but predict heart attacks, strokes, or Parkinson’s.

    And that’s just the beginning.

    I genuinely believe that technologies like AI will bring a transformation as far-reaching…

    …as the industrial revolution, the coming of electricity, or the birth of the internet.

    Now, as with every one of those waves of technology, AI will bring new knowledge…

    …new opportunities for economic growth, new advances in human capability…

    …and the chance to solve problems that we once thought beyond us.

    But like those waves, it also brings new dangers and new fears.

    So, the responsible thing for me to do – the right speech for me to make – is to address those fears head on…

    …giving you the peace of mind that we will keep you safe…

    …while making sure you and your children have all the opportunities for a better future that AI can bring.

    Now, doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.

    So, I won’t hide them from you.

    That’s why today, for the first time, we’ve taken the highly unusual step…

    …of publishing our analysis on the risks of AI…

    …including an assessment by the UK intelligence communities.

    These reports provide a stark warning.

    Get this wrong, and AI could make it easier to build chemical or biological weapons.

    Terrorist groups could use AI to spread fear and destruction on an even greater scale.

    Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.

    And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely…

    …through the kind of AI sometimes referred to as ‘super intelligence’.

    Indeed, to quote the statement made earlier this year by hundreds of the world’s leading AI experts:

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

    Now, I want to be completely clear:

    This is not a risk that people need to be losing sleep over right now.

    I don’t want to be alarmist.

    And there is a real debate about this – some experts think it will never happen at all.

    But however uncertain and unlikely these risks are, if they did manifest themselves, the consequences would be incredibly serious.

    And when so many of the biggest developers of this technology themselves warn of these risks…

    …leaders have a responsibility to take them seriously, and to act.

    And that is what I am doing today – in three specific ways.

    First, keeping you safe.

    Right now, the only people testing the safety of AI…

    …are the very organisations developing it.

    Even they don’t always fully understand what their models could become capable of.

    And there are incentives, in part, to compete to build the best models, quickest.

    So, we should not rely on them marking their own homework, as many of those working on this would agree.

    Not least because only governments can properly assess the risks to national security.

    And only nation states have the power and legitimacy to keep their people safe.

    The UK’s answer is not to rush to regulate.

    This is a point of principle – we believe in innovation, it’s a hallmark of the British economy…

    …so we will always have a presumption to encourage it, not stifle it.

    And in any case, how can we write laws that make sense for something we don’t yet fully understand?

    So, instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government.

    To do that, we’ve already invested £100m in a new taskforce…

    …more funding for AI safety than any other country in the world.

    And we’ve recruited some of the most respected and knowledgeable figures in the world of AI.

    So, I’m completely confident in telling you the UK is doing far more than other countries to keep you safe.

    And because of this – because of the unique steps we’ve already taken – we’re able to go even further today.

    I can announce that we will establish the world’s first AI Safety Institute – right here in the UK.

    It will advance the world’s knowledge of AI safety.

    And it will carefully examine, evaluate, and test new types of AI…

    …so that we understand what each new model is capable of…

    …exploring all the risks, from social harms like bias and misinformation, through to the most extreme risks of all.

    The British people should have peace of mind that we’re developing the most advanced protections for AI of any country in the world.

    Doing what’s right and what’s necessary to keep you safe.

    But AI does not respect borders.

    So we cannot do this alone.

    The second part of our plan is to host the world’s first ever Global AI Safety Summit next week, at Bletchley Park – the iconic home of computer science.

    We’re bringing together the world’s leading representatives…

    …from Civil Society…

    …to the companies pioneering AI…

    …and the countries most advanced in using it.

    And yes – we’ve invited China.

    I know there are some who will say they should have been excluded.

    But there can be no serious strategy for AI without at least trying to engage all of the world’s leading AI powers.

    That might not have been the easy thing to do, but it was the right thing to do.

    So, what do we hope to achieve at next week’s Summit?

    Right now, we don’t have a shared understanding of the risks that we face.

    And without that, we cannot hope to work together to address them.

    That’s why we will push hard to agree the first ever international statement about the nature of these risks.

    Yet AI is developing at breathtaking speed.

    Every new wave will become more advanced, better trained, with better chips, and more computing power.

    So we need to make sure that as the risks evolve, so does our shared understanding.

    I believe we should take inspiration from the Intergovernmental Panel on Climate Change…

    …which was set up to reach an international scientific consensus.

    So, next week, I will propose that we establish a truly global expert panel…

    …nominated by the countries and organisations attending …

    …to publish a State of AI Science report.

    Of course, our efforts also depend on collaboration with the AI companies themselves.

    Uniquely in the world, those companies have already trusted the UK with privileged access to their models.

    That’s why the UK is so well-placed to create the world’s first Safety Institute.

    And at next week’s Summit I will work together with the companies and countries to deepen our partnerships.

    My vision, and our ultimate goal, should be to work towards a more international approach to safety…

    …where we collaborate with partners to ensure AI systems are safe before they are released.

    And so to support this, we will make the work of our Safety Institute available to the world.

    That’s the right thing to do morally, in keeping with the UK’s historic role on the international stage.

    And it’s also the right thing economically, for families and businesses up and down the country.

    Because the future of AI is safe AI.

    And by making the UK a global leader in safe AI, we will attract even more of the new jobs and investment that will come from this new wave of technology.

    Just think for a moment about what that will mean for our country.

    The growth it will catalyse, the jobs it will create, the change it can deliver – for the better.

    And that’s the third part of our plan – to make sure that everyone in our country can benefit from the opportunities of AI.

    We’ve already got strong foundations.

    Third in the world for tech, behind only the US and China.

    The best place in Europe to raise capital.

    All of the leading AI companies – choosing the UK as their European headquarters.

    The most pro-investment tax regime…

    The most pro-entrepreneur visa regime, to attract the world’s top talent…

    …and the education reforms to give our own young people the skills to succeed.

    And we’re going to make it even easier for ambitious people with big ideas to start, grow, and compete in the world of AI.

    That’s not just about having the technical skills, but the raw computing power.

    That’s why we’re investing almost a billion pounds in a supercomputer thousands of times faster than the one you have at home.

    And it’s why we’re investing £2.5bn in quantum computers, which can be exponentially quicker than those computers still.

    To understand this, consider how Google’s Sycamore quantum computer…

    …can solve a maths problem in 200 seconds that would take the world’s fastest supercomputer 10,000 years.

    And as we invest more in our computing power, we’ll make it available for researchers and businesses, as well as government…

    …so that when the best entrepreneurs in the world think about where they want to start and scale their AI businesses, they choose the UK.

    And finally, we must target our scientific efforts towards what I think of as AI for good.

    Right across the western world, we’re searching for answers to the question of how we can improve and increase our productivity.

    Because that’s the only way over the long-term to grow our economy and raise people’s living standards.

    And in a million different ways, across every aspect of our lives, AI can be that answer.

    In the public sector, we’re clamping down on benefit fraudsters…

    …and using AI as a co-pilot to help clear backlogs and radically speed up paperwork.

    Just take for example, the task of producing bundles for a benefits tribunal.

    Before, a week’s work could produce around 11.

    Now – that takes less than an hour.

    And just imagine the benefits of that rolled out across the whole of government.

    In the private sector, start-ups like Robin AI are revolutionising the legal profession…

    …writing contracts in minutes, saving businesses and customers time and money.

    London-based Wayve is using sophisticated AI software to create a new generation of electric, self-driving cars.

    But more than all of this – AI can help us solve some of the greatest social challenges of our time.

    It can help us finally achieve the promise of nuclear fusion, providing abundant, cheap, clean energy with virtually no emissions.

    It can help us solve world hunger, by making food cheaper and easier to grow…

    …and preventing crop failures by accurately predicting when to plant, harvest or water your crops.

    And AI could help find novel dementia treatments or develop vaccines for cancer.

    That’s why today we’re investing a further £100m to accelerate the use of AI…

    …on the most transformational breakthroughs in treatments for previously incurable diseases.

    Now I believe nothing in our foreseeable future will be more transformative for our economy, our society, and all our lives, than this technology.

    But in this moment, it is also one of the greatest tests of leadership we face.

    It would be easy to bury our heads in the sand and hope it’ll turn out alright in the end.

    To decide it’s all too difficult, or the risks of political failure are too great.

    To put short-term demands ahead of the long-term interest of the country.

    But I won’t do that.

    I will do the right thing, not the easy thing.

    I will always be honest with you about the risks.

    And you can trust me to make the right long-term decisions…

    …giving you the peace of mind that we will keep you safe…

    …while making sure you and your children have all the opportunities for a better future that AI can bring.

    I feel an extraordinary sense of purpose.

    When I think about why I came into politics…

    Frankly, why almost anyone came into politics…

    It’s because we want to make life better for people…

    …to give our children and grandchildren a better future.

    And we strive, hour after hour, policy after policy, just trying to make a difference.

    And yet, if harnessed in the right way, the power and possibility of this technology…

    …could dwarf anything any of us have achieved in a generation.

    And that’s why I make no apology for being pro-technology.

    It’s why I want to seize every opportunity for our country to benefit in the way I’m so convinced that it can.

    And it’s why I believe we can and should, look to the future with optimism and hope.

    Thank you.

  • Michelle Donelan – 2023 Speech at Onward on the Future of AI

    Michelle Donelan – 2023 Speech at Onward on the Future of AI

    The speech made by Michelle Donelan, the Secretary of State for Science, Innovation and Technology, on 24 October 2023.

    Firstly let me say a massive thank you to Onward.

    I said when I first launched this new and exciting Department that people could expect to see a constant drumbeat of action from my officials, from my ministers and my entire team…

    What I didn’t expect was for Onward to take that challenge on too!

    From your brilliant report on generative AI earlier this year, to Allan Nixon’s Wired for Success Report which gave us insights into DSIT that illumined Whitehall.

    And I hear through the grapevine that Onward are publishing another AI-focused report in the coming weeks, so I look forward to reading that.

    It is safe to say that the world is now wide awake to the opportunities and the challenges posed by artificial intelligence.

    In the last 3 years alone, MPs have mentioned Artificial Intelligence more times in the House of Commons than they did in the entire 30 years before that.

    And when I stood up to give my first speech as the Secretary of State for the Department for Science, Innovation and Technology, I made it clear that we were going to be different and we were going to do things differently.

    So, when it comes to AI, I think this is especially important, because we cannot afford for DSIT to be a normal ‘business as usual’ Whitehall department.

    Nor can we deliver extraordinary things without more extraordinary people in our Departmental team.

    We need to be as agile, innovative and accessible as the entrepreneurs, businesses and researchers that we represent.

    And I am pleased to report that we have done exactly that.

    Over a matter of months we have added world-renowned AI experts to our Department and Taskforce…

    From renowned AI professors like David Krueger and Yarin Gal…

    To one of the Godfathers of AI, Yoshua Bengio….

    With our skills, Frontier AI Taskforce and our global AI Safety Summit, it is clear that the UK is perfectly placed to lead the international charge on seizing the opportunities of AI whilst gripping the risks.

    However, today I want to get beyond the statistics and go deeper into the philosophy that is driving our approach to AI.

    Many of you will have heard me talk about AI safety being the UK’s priority, and how we can only truly utilise the extraordinary benefits AI has to offer once we have tackled some of the safety challenges associated with the frontier.

    To some this may sound overcautious, or as though we are being driven by fear of the risks rather than optimism about the opportunities. But in fact it is precisely the opportunities that we are focusing on.

    Today, I want to smash these myths head on.

    Here at Onward, I want to set out how the UK’s approach to safety and security in AI will make it the best place in the world for new AI companies not only to locate, but to grow.

    Safety and security are key to unlocking innovation.

    The country which tackles the key AI safety risks first will be the first to take full advantage of the huge potential that AI has to offer.

    That is why the UK is putting more investment into these questions right now than any other country in the world.

    Questions like “how do we prevent misuse of Frontier AI by malicious actors?”

    And “how do we ensure we don’t lose human control and oversight of this new technology?”

    And “how do we protect our society, including our democratic system?”

    Think of how air travel – once considered a dangerous new technology by many – is now one of the safest and most beneficial technologies in human history.

    We got there by working with countries across the world to make sure we have the right safety mechanisms in place – and now we all reap the benefits of flying safely.

    Safety is absolutely critical to unlocking adoption across our economy.

    Boosting consumer confidence is what will make the difference between a country taking a few years to adopt new technology into their lives, or a few decades.

    I want to make sure the UK is at the forefront of reaping the benefits of this transformative technology.

    Our approach to AI has been commended for being agile, open and innovative. But we need more research to guide our approach.

    In many cases, we simply don’t understand the risks in enough detail or with enough certainty right now, because this is an emerging technology that is developing quicker than any other technology in human history.

    It took mankind just over a lifetime to go from the horse and cart to the space race.

    Yet in the last four years, large language models have gone from barely being able to write a coherent sentence to being able to pass the bar exam and medical exams – and who knows what large language models are set to achieve next.

    So the pace of development is fast and unpredictable, which means our focus needs to be on understanding the risks.

    And in life I do think it’s important to understand the problem before rushing to solutions. And with AI this should be no different.

    That is why we established the Frontier AI Taskforce and have appointed a world-leading research team to turbo-charge our understanding of frontier AI with expert insights.

    The Taskforce is already making rapid progress, forging partnerships with industry and developing innovative approaches to addressing the risks of AI and harnessing its benefits.

    The Global AI Safety Summit is also an opportunity to build that understanding, share learnings and establish a network globally to work together to ensure our research can keep up with this transformative emerging technology.

    Indeed, one of the key objectives of the Summit is to form an agreement on what the key risks actually are.

    By bringing countries, leading tech organisations, academia and civil society together, the UK will lead the international conversation on frontier AI.

    The global nature of AI means that international concerted action is absolutely critical. AI doesn’t stop at geographical boundaries and nor should our approach.

    But of course, we do also need to make sure we have the right domestic approach in place to drive safe, responsible AI innovation.

    Earlier this year we set out a principles-based approach through the AI Regulation White Paper.

    Our approach is agile, targeted and sector-specific.

    We here in the UK understand that AI use cases are drastically different across different sectors.

    A one-size-fits-all system that treats agri-tech the same way as military drones because they both use AI is unreasonable and undeliverable.

    By empowering existing regulators to regulate AI in their own sectors where they have their own expertise, we have created the most tailored and responsive regulatory regime anywhere in the world.

    We have also supported different sectors with a central risk function focused on horizon scanning.

    Later this year, we will publish a full response to our White Paper – showing how our approach is keeping pace with this transformative technology.

    So, what does this all mean for small businesses?

    The regulatory approach set out in the White Paper is specifically designed to be flexible, support innovation and ensure that small, new and challenger AI companies can grow and succeed here in the UK.

    And indeed, we are already taking steps to deliver on those aims by working with the Digital Regulation Cooperation Forum to pilot a new advisory service for AI and digital innovators so companies can bring their products more quickly and safely to market.

    We want the UK to remain one of the most nimble and innovative places in the world for AI companies to grow.

    That is why it is right to reaffirm our commitment to our principles-based approach to regulation, whilst also taking bold steps to address risks at the frontier, investing in world-class research capabilities and working closely with industry and civil society to make sure our AI governance approach keeps pace.

    Our proportionate and targeted approach will enable us to foster innovation and encourage companies to grow and catch up with the frontier – because we are not pulling up the drawbridge. In fact, we want to give consumers and the public the confidence to boost adoption, ensuring we can seize the opportunities safely.

    Far from a race to the bottom, the key AI developers across the world and here in the UK are telling me they are looking for countries where they will have certainty, clarity and support when it comes to building and deploying AI safely.

    They want a mature, considered and agile approach to AI that maximises the potential for innovation by mitigating the risks.

    They want to open up in countries where consumers are open-minded and excited about using their AI tools to improve their lives, which is why, with the global AI Safety Summit, we are not only talking about risks, but also about opportunities for the benefit of mankind.

    And the proof is in the pudding: OpenAI and Anthropic have opened their international offices here.

    I want the British people to use AI with the same confidence and lack of fear as we have when we book a plane ticket.

    And I want AI companies to know that the UK is the most up-to-date, agile and economically rewarding place in the world to build their business.

    So, to all the smaller AI companies out there, let me send this message out to you today: the UK is and will remain the most agile and innovative place for you to develop your business.

    Safety at the frontier means prosperity across the sector.

    We will grip the risks so that we can seize the opportunities.

    Thank you.

  • Oliver Dowden – 2023 Speech to the Future Investment Initiative

    Oliver Dowden – 2023 Speech to the Future Investment Initiative

    The speech made by Oliver Dowden, the Deputy Prime Minister, in Riyadh on 24 October 2023.

    Today’s theme is ‘defining dynamism amid global shocks’.

    And there could be no more apt place to discuss dynamism than Saudi Arabia.

    The pace of change in the Kingdom is dizzying:

    Asserting global leadership from the Gulf…

    …rocketing up the rankings for ease of doing business…

    …leapfrogging the world’s largest economies…

    …embracing technological change…

    … and transforming an economy fuelled by oil… into one powered by renewables…

    …making Vision 2030 not just a vision, but a reality.

    That is true dynamism: embracing change, and leading the charge.

    With your megacities and giga-projects, Saudi is not just adopting clean technologies but pioneering them…

    …delivering solutions that we will all be using in the future.

    That’s why the UK is proud to partner with you in a huge array of areas, such as financial services, clean energy, urban regeneration, academia, defence, sports, e-gaming and more.

    Truly a partnership for the future.

    AGE OF SHOCKS

    But we do so in a world where shocks have become the new norm.

    We rightly refer to them as global shocks because their impact ripples from the epicentre right across our planet.

    The great financial crisis … the Covid pandemic … Russia’s invasion of Ukraine …

    …Record temperatures and devastating natural disasters…

    …and, of course, the brutal strike into the heart of Israel by Hamas terrorists just two weeks ago…

    …the very worst of humanity.

    Thousands of people have died horrifically… unnecessarily.

    Tens of thousands more are injured, or are in mourning.

    And millions are now living in fear of the consequences.

    This has caused untold misery and has led to deep, widespread insecurity.

    And we stand with all innocent victims of this conflict.

    Urging respect for international humanitarian law…

    And for parties to take every possible step to avoid harming civilians.

    And we welcome ongoing efforts to open up humanitarian access to Gaza…

    …we have pledged millions extra in aid…

    …and we remain committed to the two state solution.

    Britain stands together to reject terror, hate and prejudice.

    …and to reset the path to peace and long-term stability.

    TRADITIONAL SECURITY

    And as the Deputy Prime Minister of the United Kingdom, I have been tasked personally by the Prime Minister to drive cross-Government resilience to shocks of all kinds.

    Understanding the nature of the threats we face today…

    …and scanning the horizon to predict the threats we may face tomorrow.

    The first duty of every government is to protect its people.

    Of course, our first line of defence is always our armed forces.

    Those brave men and women are our resilience personified.

    And the UK and Saudi Arabia have a proud partnership in security which stretches back into our history…

    …sharing intelligence, exchanging military hardware, training alongside one another…

    …and continuing into the future with our world-leading Typhoon jets.

    ECONOMIC SECURITY (AND TECH SHOCKS)

    But increasingly, the ripples of recent global shocks…

    …reverberate in an economic sense…

    …disrupting supply chains… driving up energy prices… and causing food shortages.

    And it is on this economic front that I am leading the UK’s charge to be out in front in terms of our resilience…

    …developing and retaining critical domestic capabilities…

    …screening investment into UK companies…

    …protecting Government procurement from national security threats…

    …and better understanding our supply chains.

    As we scan the horizon, we see that rapid technological advancements will only make this task more urgent.

    We’ve had a glimpse into this future…

    …with cyberattacks bringing public services to a halt…

    …and ransomware wiping millions off companies’ share prices.

    Deepfakes have duped consumers…

    …bots have interfered in elections…

    …and intellectual property has been stolen from businesses and academic institutions.

    Now so far, these have been relative skirmishes…

    …wrought by an unholy alliance between hostile states and non-state actors.

    But with the enormous potential of artificial intelligence and quantum computing…

    …there is a very real possibility that the world’s next shock will be a tech shock.

    And so next week the United Kingdom will be convening the world’s leading nations and pioneering AI companies for the first global frontier AI safety summit.

    These emerging technologies represent exciting opportunities.

    …they exist at the cutting edge of development, often yet to be commercialised and with unknown end applications…

    But we also know that hostile state actors are actively seeking these technologies for their own competitive advantage…

    … or even to enhance their military capability.

    And the most valuable commodities to both businesses and nations are increasingly the source code… the technical designs… or other – intangible – intellectual property that underpins innovation.

    Where they have a military or dual-use application, traditional means of controlling these transfers are often simply not enough.

    These intangible products can now be exported in a second – attached to an email…

    ….with no customs official to check any documentation…

    …nor a list of multilaterally agreed product categories to check against…

    …because these technologies have only just been invented…

    …often in small university spin-outs, rather than the established defence contractors used to working with Government.

    This dynamism in the tech sphere must be met with dynamism within Government.

    Now I know that ‘Dynamism’ and ‘Government’ are not, perhaps, two words which you often put together…

    But we cannot afford not to be…

    This is why I am reviewing our tools to ensure they are fit for purpose:

    • Examining our export control regime, to ensure that it strikes the right balance for emerging technologies relevant to national security…
    • Exploring other paths through which this sensitive technology can leak out unchecked, such as through outbound investment flows…
    • And working with academic institutions and start-ups to ensure they are alert to the risks, and have the toolkit to protect themselves.

    We need to build a policy environment that provides the private sector with the confidence to innovate…

    Confidence to build partnerships…

    Confidence to grow.

    Economic security should never be seen as a constraint on growth.

    It is an enabler of it.

    UK-KSA ECONOMIC PARTNERSHIP

    So just as allies work together on physical security, so we need to work together to build economic security.

    Just as important as the collaboration between nation states is the partnership between Government and business.

    Which is why, earlier this year, we established the National Protective Security Authority within MI5 – so that our security services can support business in understanding and protecting themselves against the threats they face.

    The partnership between the UK and Saudi Arabia is a fine example of the collaboration we need.

    We made a green finance deal last year – ensuring we protect our energy needs for the future…

    We’ve made an agreement on critical minerals this year – enhancing our collaboration and exploring new sources of supply for the elements that are so vital to our future prosperity and national security.

    And through to next year I will personally be prioritising building the bond between our two kingdoms.

    So today I can announce that I will be leading a new strand of engagement with the Kingdom of Saudi Arabia to enhance our cooperation and mutually beneficial investment relationships, building on similar relations across the Gulf.

    The partnership between our kingdoms has helped to shape the world we live in, and will be a linchpin in shaping the future through to 2030.

    But just as important as the collaboration between nation states, is the partnership between government and business.

    So I will be chairing a new Public-Private Forum between Government and business on economic security challenges… with the first meetings later this year.

    And I want to be very clear to all of you, that my door is always open to investors to discuss our economic security agenda.

    And our first task when Prime Minister Rishi Sunak took office was to restore the predictability and stability that investors so cherish in the United Kingdom.

    Our task now is to drive growth, jobs, prosperity and investment. And I know that the Kingdom of Saudi Arabia will be a key partner in that mission.

    PROSPERITY AGENDA

    But we should also never underestimate how much our peace, stability, and resilience to shocks are underpinned by our prosperity.

    A strong, growing economy doesn’t just allow you to invest in your armed forces….

    …it also allows you to deliver for your people…

    …it is a signal to the world that you are a serious partner and a key player.

    Those who will succeed in this age of uncertainty, as new economic powers vie for pole position…

    …are those with the fastest-growing, most vibrant, dynamic economies.

    And those nations – and those businesses – will get to shape the new global order.

    And the UK is laser-focused on that prosperity agenda.

    We are wide open for business…

    …a world-leader in climate solutions, life sciences and creativity…

    …a wonderful place to invest and innovate…

    …and a partner with whom to seize technological opportunities.

    Happily, these areas where we excel are the areas where Saudi wants to grow.

    So your Vision 2030 is our vision too.

    We’re by your side…

    …with scores of fund managers flocking to Riyadh…

    …and hundreds of UK businesses operating all across Saudi.

    Meanwhile, of course London’s global financial centre remains committed to being the preferred hub for this part of the world…

    …thousands of Saudi students and tourists are in Britain…

    …and Saudi investment is benefitting every corner of our country.

    That is all part of a deepening partnership with the wider GCC – the UK’s 7th largest export market…

    …and with whom we hope to increase trade still further through a free trade agreement.

    Geopolitical shifts are a great challenge to all our economies…

    …but we can turn them into an opportunity to build a new world order based on rules, competition, open markets, innovation and investment.

    Because that is the definition of dynamism: turning challenges into opportunities.

    Not ignoring the threat of climate change but seizing the opportunities we have to build a green future.

    Not shunning artificial intelligence but using it to solve some of the greatest problems we face.

    Not turning inwards as new powers emerge and challenges arise, but forging new alliances and strengthening old ones.

    That is how we will withstand shocks, build resilience and embrace opportunities for all our people.

    Thank you.