
Margaret Hodge – 2022 Speech on the Online Safety Bill

The speech made by Margaret Hodge, the Labour MP for Barking, in the House of Commons on 5 December 2022.

I pay tribute to all the relatives and families of the victims of online abuse who have chosen to be with us today. I am sure that, for a lot of you, our debate is very dry and detached, yet we would not be here but for you. Our hearts are with you all.

I welcome the Minister to his new role. I hope that he will guide his Bill with the same spirit set by his predecessors, the right hon. Member for Croydon South (Chris Philp) and the hon. Member for Folkestone and Hythe (Damian Collins), who is present today and has done much work on this issue. Both Ministers listened and accepted ideas suggested by Back Benchers across the House. As a result, we had a better Bill.

We all understand that this is groundbreaking legislation, and that it therefore presents us with complex challenges as we try to legislate to achieve the best answers to the horrific, fast-changing and ever-growing problems of online abuse. Given that complexity, and given that this is our first attempt at regulating online platforms, the new Minister would do well to build on the legacy of his predecessors and treat the amendments on which there are votes tonight as wholly constructive. The policies we are proposing enjoy genuine cross-party support, and they are put forward to help the Minister, not to cause him problems.

Let me express particular support for new clauses 45 to 50, in the name of the right hon. Member for Basingstoke (Dame Maria Miller), which tackle the abhorrent misogynistic problem of intimate image abuse, and amendments 1 to 14, in the name of the right hon. and learned Member for Kenilworth and Southam (Sir Jeremy Wright), which address the issue of smaller platforms falling into category 2, which is now outside the scope of the regulations. We all know that the smallest platforms can present the greatest risk. The killing of 51 people in the mosque in Christchurch, New Zealand, is probably the most egregious example, as the individual concerned used 8chan to plan his attack.

New clause 17, which I have tabled, seeks to place responsibility for complying with the new law unequivocally on the shoulders of individual directors of online platforms. As the Bill stands, criminal liability arises only when senior tech executives fail to co-operate with information requests from Ofcom. I agree with the right hon. and learned Member for Kenilworth and Southam that that is far too limited. The Bill allows executives to choose and name the individual whom Ofcom will hold to account, so the company itself, not Ofcom, decides who is liable. That is simply not good enough.

Let me explain the thinking behind new clause 17. The purpose of the Bill is to change behaviour. Our experience in many other spheres of life tells us that the most effective way of achieving such change is to make individuals at the top of an organisation personally responsible for the behaviour of that organisation. We need to hold the chairmen and women, directors and senior executives to account by making those individuals personally liable for the practices and actions of their organisation.

Let us look at the construction industry, for example. Years ago, workers dying on building sites was an all too regular feature of that industry. Only when we reformed health and safety legislation and made the directors of construction companies personally responsible and liable for health and safety standards on their sites did we see an incredible 90% drop in deaths on building sites. Similarly, when we introduced corporate and director liability offences in the Bribery Act 2010, companies stopped trying to bribe their way into contracts.

It is not that we want to lock up directors of construction companies or trading companies, or indeed directors of online platforms; it is that the threat of personal criminal prosecution is the most powerful and effective way of changing behaviour. It is just the sort of deterrent tool that the Bill needs if it is to protect children and adults from online harms. That is especially important in this context, because the business model that underpins the profits that platforms enjoy encourages harmful content. The platforms need to encourage traffic on their sites, because the greater the traffic, the more attractive their sites become to advertisers; and the more advertising revenue they secure, the higher the profits they enjoy.

Harmful content attracts more traffic and so supports the platforms’ business objectives. We know that from studies such as the one by Harvard law professor Jonathan Zittrain, which showed that posts that tiptoe close to violating platforms’ terms and conditions generate far more engagement. We also know it from Mark Zuckerberg’s decisions in the lead-up to and just after the 2020 presidential election, when he personally authorised tweaks to the Facebook algorithm to reduce the spread of election misinformation. After the election, however, despite officials at Facebook asking for the change to stay, he ensured that the previous algorithm was reinstated. An internal Facebook memo revealed that the tweak preventing fake news had led to “a decrease in sessions”, which made his offer less attractive to advertisers and hit his profits. Restoring fake news helped restore his profits.

The incentives in online platforms’ business models promote rather than prevent online harms, and we will not break those incentives by threatening to fine companies. We know from our experience elsewhere that, even at 10% of global revenue, such fines will inevitably be viewed as a cost of doing business, which will simply be passed on through higher advertising charges. However, we can and will break the incentives in the business model if we make Mark Zuckerberg or Elon Musk personally responsible for breaking the rules. It will not mean that we will lock them up, much as some of us might be tempted to do so. It will, however, provide the most powerful incentive that we have as legislators to change behaviour.

Furthermore, we know that the directors of online platforms personally take decisions in relation to harmful content, so they should be personally held to account. In 2018, Facebook’s algorithm was promoting posts for users in Myanmar that incited violence against protesters. The whistleblower Frances Haugen showed evidence that Facebook was aware that its engagement-based content was fuelling the violence, but it continued to roll it out on its platforms worldwide without checks. Decisions made at the top resulted directly in ethnic violence on the ground. That same year, Zuckerberg gave a host of interviews defending his decision to keep Holocaust denial on his platform, saying that he did not believe posts should be taken down for people getting it wrong. The debate continued for two years until 2020, when, only after months of protest, he finally decided to remove that abhorrent content.

In what world do we live where overpaid executives running around in their jeans and sneakers are allowed to make decisions on the hoof about how their platforms should be regulated without being held to account for their actions?

Mr David Davis

The right hon. Lady and I have co-operated to deal with international corporate villains, so I am interested in her proposal. However, a great number of these actions are taken by algorithms—I speak as someone who was taken down by a Google algorithm—so what happens then? I see no reason why we should not penalise directors, but how do we establish culpability?

Dame Margaret Hodge

That is for an investigation by the appropriate enforcement agency—Ofcom et al.—and if there is evidence that culpability rests with the managing director, the owner or whoever, they should be prosecuted. It is as simple as that. A case would have to be established through evidence, and that should be done by the enforcement agency. I do not think this is any different from any other form of financial or other crime. In fact, it is from my experience in that field that I came to this conclusion.

John Penrose (Weston-super-Mare) (Con)

The right hon. Lady is making a powerful case, particularly on the effective enforcement of rules to ensure that they bite properly and that people genuinely pay attention to them. She gave the example of a senior executive talking about whether people should be stopped for getting it wrong—I think the case she mentioned was Holocaust denial—by making factually inaccurate statements or allowing factually inaccurate statements to persist on their platform. May I suggest that her measures would be even stronger if she were to support new clause 34, which I have tabled? My new clause would make factual inaccuracy a wrong, to be prevented and pursued by the kinds of regulators she is talking about. It would be a much stronger basis on which her measure could then bite.

Dame Margaret Hodge

Indeed. The way the hon. Gentleman describes his new clause, which I will look at, is absolutely right, but may I make a more general point, because it speaks to the issue of “legal but harmful”? What I really fear about the “legal but harmful” rule is that we will create more and more laws to make content illegal, which, ironically, will lock up more and more people rather than create structures and systems that prevent the harm occurring in the first place. So I am not always in favour of new laws that simply criminalise individuals. I would have loved us to keep to the “legal but harmful” route.

We can look to Elon Musk’s recent controversial takeover of Twitter. Decisions taken by Twitter’s newest owner—by Elon Musk himself—saw use of the N-word increase by nearly 500% within 12 hours of his acquisition. Allowing Donald Trump back on Twitter gives chilling permission to Trump and others to use the site yet again to incite violence.

The tech giants know that their business models are dangerous. Platforms can train their systems to recognise so-called borderline content and reduce engagement. However, it is for business reasons, and business reasons alone, that they actively choose not to do that. In fact, they do the opposite and promote content known to trigger extreme emotions. These platforms are like a “danger for profit” machine, and the decision to allow that exploitation is coming from the top. Do not take my word for it; just listen to the words of Ian Russell. He has said:

“The only person that I’ve ever come across in this whole world…that thought that content”—

the content that Molly viewed—

“was safe was…Meta.”

There is a huge disconnect between what Silicon Valley executives think is safe and what we expect, both for ourselves and for our children. By introducing liability for directors, we might finally change the behaviour of these companies. Experience elsewhere has shown us that this would prove to be the most effective way of keeping online users safe. New clause 17 would hold the directors of a regulated service personally liable on the grounds that they have failed, or are failing, to comply with any duties set in relation to their service, for instance a failure that leads to the death of a child. The new clause further states that the decision on who is liable would be made by Ofcom, not the provider, meaning that responsibility could not be shirked.

I say to all Members that if we really want to reduce the amount of harmful abuse online, then making senior directors personally liable is a very good way of achieving it. Some 82% of UK adults agree with us, Labour Front Benchers agree and Back Benchers across the House agree. So I urge the Government to rethink their position on director liability and support new clause 17 as a cross-party amendment. I really think it will make a difference.