Companies will be categorised into two tiers according to the size of their online presence and the level of risk their services pose to users. The long-awaited Online Safety Bill could also see online platforms being blocked in the UK if they do not abide by a duty of care to their users, particularly children and vulnerable people.
The legislation, which the Government will bring forward in 2021, means Ofcom will be able to fine firms up to £18m or 10% of annual global turnover, whichever is higher.
Category 1 companies are likely to include large household-name social media companies. As well as the duty to address relevant illegal content and content which is harmful to children, Category 1 companies will also be under a duty to take action against content which, while strictly legal, may be harmful. This will not be a requirement for Category 2 companies.
The legislation will define content as harmful where ‘it gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals’.
Category 1 companies will also be under a legal requirement to publish transparency reports on the measures they have taken to tackle online harms. Companies will be able to fulfil their duty of care by complying with statutory codes of practice published by Ofcom. This will involve the implementation of systems and processes to improve users’ safety, such as specific user tools, content moderation, recommendation procedures, and reporting and redress mechanisms.
The UK Government is promising to publish interim Codes of Practice on terrorism and child sexual exploitation and abuse which, whilst voluntary, will help companies to understand the changes they need to make before the publication of the statutory Codes of Practice.
Larger web companies such as Facebook, TikTok, Instagram and Twitter that fail to remove harmful content such as child sexual abuse, terrorist material and content promoting suicide could face huge fines – or even have their sites blocked in the UK.
They could also be punished if they fail to prove they are doing all they can to tackle dangerous disinformation and fake news about coronavirus vaccines.
Under the new rules, the most popular social media sites will be expected to set and enforce clear terms and conditions which explicitly state how they will handle content which is legal but could cause significant physical or psychological harm to adults.
One measure announced to address concerns regarding freedom of expression is that companies will need to implement effective complaint mechanisms so that users can object if they feel their content has been unfairly removed. These new online laws are not expected to cover articles or comment sections on news websites. The new rules are designed to protect visitors to sites which allow users to post their own content or interact with others. The Government is also considering whether to make the promotion of self-harm illegal.
There is widespread concern that when news is accessed via social media or search, internet giants will try to protect themselves from draconian penalties by setting their algorithms to censor content which is controversial but legitimate, such as criticism of Government handling of the Covid crisis.
Critics also say that commercial organisations should not have the power to decide what news the public can read in social media news feeds when it comes from legitimate news organisations.
What are the sanctions for non-compliance?
Ofcom, the media and communications regulator, will be responsible for enforcing the rules and will have the power to impose fines for non-compliance of up to 10% of a company’s annual global turnover or £18 million (whichever is higher).
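To put the cap in context (the figures here are purely illustrative): a platform with annual global turnover of £1bn could face a maximum fine of £100m, since 10% of turnover exceeds the £18m figure, whereas for a smaller firm with turnover of, say, £50m, the £18m threshold would apply instead, as it is the higher of the two.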
The regulator may also take enforcement action to require providers to withdraw access to key services. For the most serious failures of the duty of care, Ofcom will have the power to block a company’s services from being available in the UK entirely.
Earlier proposals also included the possibility of imposing criminal penalties on senior executives for failure to comply with the duty of care generally. While the Government has chosen not to pursue such broad sanctions, it has decided to include a power to introduce, through secondary legislation at a later date, criminal offences for senior managers who ‘fail to respond fully, accurately, and in a timely manner, to information requests from the online harms regulator’.
The power will expire after two years, and the Government has said it will only exercise it if, on review of the new regime in its first year, it is apparent that industry has not complied with the new information-sharing requirements.
Under the new Online Safety Bill, businesses will have a new ‘duty of care’ to protect children from cyberbullying, grooming and pornography.
However, there are also concerns that the legislation does not go far enough to protect legitimate news on the internet, and that social media news feeds could be censored by tech giants.
Conclusion
The 10% figure for fines will clearly be the headline from this latest development, as it dwarfs even the most serious GDPR fines, which are capped at 4% of global turnover. Nevertheless, the full response is largely a confirmation of earlier proposals. Despite some delays, we now have more certainty as to timing, with the introduction of a Bill promised in 2021.
There will still be much to debate when the Bill is published. There are very serious questions of free speech and technical implementation to be addressed in relation to the Government’s intention to police lawful content, an avenue the European Commission is avoiding in its plans for the Digital Services Act, further details of which were published on the same day.
This divergence will put a post-Brexit UK out of kilter with the EU and create countless moderation headaches for platforms operating across both regimes.