Hello and thank you for having me here today; it is a pleasure to be in Washington.
Now, from the outset I must confess I have brought a number of British bugs with me, so if I end up coughing, spluttering or drying up, please forgive me and bear with me; I will do my very best throughout the speech.
And there is a reason that my first speech on the subject of online safety since the UK’s world-leading Online Safety Act passed is taking place here in the United States: the UK and the USA share a special relationship that is fundamentally about our values.
The Online Safety Act – which I want to talk about for a bit today – is about reaffirming our longstanding values and principles and extending them to the online world. Empowering adults, protecting free expression, standing up for the rule of law, and most importantly, protecting our children.
These are the values that Britain has pioneered for centuries, and they are also the values that made the extraordinary story of the United States possible.
In the most recent chapter of that story, the transformational power of the internet has created the online world that is increasingly, seamlessly intertwined with the real world. But the values that made our free, safe, liberal societies possible have not been reflected online – especially when it comes to social media.
The guardrails, customs and rules that we have taken for granted offline have, in the last two decades, been notable by their absence online. FOSI have been an important part of the conversation to identify this problem, and I want to extend my thanks to you for all the tireless work you have done on this incredibly important agenda.
And thanks to the work of campaigners here and in the UK, lawmakers from Washington to Westminster have taken the issue of online safety increasingly seriously, especially when it comes to the protection of our children.
And today I want to share with you how we rose to the challenge of online safety in the UK – what we did, how we did it, and I guess why we did it as well.
I think the why of that equation is the best place to start, given FOSI’s role in helping to answer that question over the years. Now, my department was created back in February to seize the opportunities of our digital age. Not just the opportunities that are in front of our generation now, but the opportunities that will potentially shape the futures of our children and our grandchildren.
My 6-month-old son will grow up thinking nothing of his ability to communicate with people thousands of miles away and, I hope, he is going to go on and do much more: sharing research with his school friends, learning the languages of countries he might not even have visited, and gaining new skills that will enable him to make the most of his talents when he grows up. Of course, if you ask my husband, he will tell you he hopes those talents will lead him to Premier League football.
But we cannot afford to ignore the dangers that our children increasingly face online, and it is a sobering fact that children nowadays are just a few clicks away from the adult world, whether that is opening a laptop or picking up an iPad.
And despite the voluntary efforts of companies and the incredible work of campaigners, the stats tell us unequivocally that voluntary efforts are simply not enough.
Did you know that the average age that a child sees pornography is 13? When I first heard that, it really, really struck me as something that needs to be dealt with. And a staggering 81% of 12–15-year-olds have reported coming across inappropriate content when surfing the web, including sites promoting suicide and self-harm.
Now, regardless of ideology or political party, I do not think anyone can look at what is happening to our children and suggest that the hands-off approach that has dominated so far is working. I believe that we have a responsibility, and in fact a duty, to act when the most vulnerable in our society are under increasing threat – especially our children.
So, when I stood in the House of Commons during the Bill’s passage, I said enough is enough – and I meant it.
Now, I defy any person who says it cannot or should not be done – as adults it is our fundamental duty to protect children and be that shield for them against those who wish to do them harm. And that is why in the UK, I have been on somewhat of a mission to shield our children through the Online Safety Act.
And we started with the obvious – applying the basic common-sense principles of what is illegal offline, should actually be illegal online. Quite simply if it is illegal in the streets – it should be illegal in the tweets.
No longer will tech companies be able to run Wild West platforms where they can turn a blind eye to things like terrorism and child abuse. The days of platforms filled with underage users, when even adverts are tailored to those underage users, are now over.
If you host content only suitable for adults, then you must use highly effective age assurance tools to prevent children from getting access.
We can and we will prevent children from seeing content that they can never unsee – pornography, self-harm, serious violence, eating disorder material. No child in Britain will have to grow up being exposed to that in the future, and I think that is quite remarkable. Because when we consider the impact that content is having on our children, it is quite frankly horrific.
Of course, we know that most websites and all the major social media platforms already have some policies in place to safeguard children. In a few days I am travelling to Silicon Valley to meet many of them, and what I will be telling them is that the Online Safety Act is less about companies doing what the Government is asking them to do – it is about companies doing what their users are asking them to do.
Most companies actually do have robust and detailed terms of service. In fact, all of the 10 largest social media platforms in the world ban sexism, they ban racism, homophobia, and just about every other form of illegal abuse imaginable.
Yet these terms are worthless unless they are enforced – and too often, they are not.
So, the legislation that we have produced in the UK will mean that social media platforms will be required to uphold their own terms and conditions.
For the first time ever, users in Britain can sign up to platforms knowing that the terms they agree with will actually be upheld, and that the platforms will face eye-watering fines if they fail to do so.
But do not make the mistake of thinking that this Act is anti-business. Far from it, we view the Online Safety Act as a chance to harness the good that social media can do whilst tackling the bad, and because we believe in proportionality and innovation, we have not been prescriptive in how social media giants and messaging platforms should go about complying.
I believe it’s never the role of the Government to dictate to business which technologies they use. Our approach has remained ‘tech neutral’ and business friendly.
To borrow an American phrase, we are simply ensuring that they step up to the plate and use their own vast resources and expertise to provide the best possible protections for children.
And I know this matters on the other side of the Atlantic too, because the online world does not respect borders, and those who wish to do our children harm should not be left with the sense that they can get away with it in some countries and not in others, or be able to use that to their advantage.
And that is why in the UK, we are taking steps to enable our online safety regulator, Ofcom, to share information with regulators overseas including here.
These powers will complement existing initiatives, such as the Global Online Safety Regulators Network. A vital programme – which of course was launched at the FOSI conference last year – bringing together like-minded regulators to promote and protect human rights.
And this momentum has been backed up by government action too: the US Administration has established an inter-agency Kids Online Health and Safety Task Force. Both the Network and the Task Force are very welcome signs of the increasing unity between the UK and the US on this important agenda.
Many of the aims perfectly complement what we are trying to do in the UK and I am keen that both our governments continue to work together.
And while protecting children has remained our priority throughout the legislative process, we have been incredibly innovative with the way that we help protect adults online too. I believe when it comes to adults, we must take a different approach to the one that we take for children.
Liberty and free expression are the cornerstones of the UK’s uncodified constitution, and of course at the heart of the US Constitution and Bill of Rights. So when thinking about protecting adults online, we knew we could not compromise these fundamental principles.
In fact, I believe that the Act would have to actively promote and protect freedom and liberty for adults if it were to be successful in the long term, and that’s exactly what we did.
So rather than tell adults what legal content they can and cannot see, we instead decided to empower adults with freedom and choice – on many platforms for the very first time. Through what are known as user empowerment tools, the Act requires companies to finally give adults a direct choice over the types of content they see and engage with.
Taking the power out of the hands of unaccountable algorithms and placing it back in the hands of each and every individual user. Where an adult does not want to see certain types of legal content, they will have the power to toggle that content on and off as they choose, and in some cases, filter out keywords.
Choice, freedom, and control for adults, while robustly protecting children at the same time. Combined together, these form the framework that we believe will become the global norm for online safety in the decades ahead.
Now, just finally, while the glow of our successful Global AI Safety Summit is still bright, I want to touch briefly on the challenges of AI when it comes to online safety.
We are discussing ‘New Frontiers in Online Safety’ today – and it is impossible to do that without talking about the technology that will define this century.
Although AI brings enormous opportunities – from combating climate change to discovering life-saving drugs to helping our public services – it also brings grave risks, including to online safety. We saw that just the other month in southern Spain, where fake nude images of real girls had been created using AI – a case that shocked us all.
And recently in Britain, fake AI-generated audio also targeted the leader of the opposition and spread rapidly on social media before being promptly debunked. So, we must be clear about the serious threat AI presents to our societies, from our children’s safety to our democratic processes and the integrity of our elections, something that we both care acutely about as we march towards our elections.
And that is why we hosted the first ever AI Safety Summit earlier this month at Bletchley Park, which brought together 28 countries and the European Union, representing the vast majority of the world’s population. And we signed an unprecedented agreement known as the Bletchley Declaration.
Despite some claiming that such a declaration would be rejected by many countries in attendance, we agreed that, for the good of all, AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and, of course, responsible.
But I have been clear that when it comes to online safety, especially for our children, we cannot afford to take our eye off the ball in the decade to come.
And the historic Bletchley Declaration lays out a pathway for countries to follow together that will ultimately lead to a safer online world, but it is up to us all to ensure that we continue down that pathway.
And in support of that mission, I have directed the UK’s Frontier AI Taskforce to rapidly evolve into a new AI Safety Institute, giving our best and brightest minds a key role in delving into the risks that AI presents and in pre-deployment testing. And of course, it will partner with the US’s own Safety Institute, which the Vice President announced in London during the summit.
We must also recognise that AI can of course be part of the solution to many of the problems we are discussing today – from detecting and moderating harmful content to proactively mitigating risks like the generation and dissemination of deepfakes.
FOSI’s new report, published today, provides important insights on the early use of generative AI tools by parents and teens, and how it will impact children’s safety and privacy online. I will be taking these findings back to my officials in London and ensuring that we deepen the already close relationship between our two countries when it comes to protecting our children.
Now, while I hope my speech today has been somewhat of a soft-sell, if you like, for the online safety framework that we have created in the UK, I actually do not think our approach requires much salesmanship to the rest of the world. Because even before our Online Safety Act became law, companies began implementing key parts of its provisions and adapting their behaviour.
Many social media platforms now allow keyword filtering, some have started exploring and piloting age assurance methods, and many are proactively cleaning up illegal content through new innovative techniques.
So, if there is one thing I want to say to American policymakers who want to make a real difference for children and adults online, it’s be ambitious, put children first, front and centre, and above all, defend the values that you would expect to see on the streets as ferociously online as you would in person.
As the online world and the offline world merge ever closer together, now is the time to stand firm and uphold the values that we share, and the values that got us here in the first place.