January 2025
Starting from: £99 + VAT
Format: Downloadable PDF
This conference focused on next steps for child online safety regulation, policy and practice in the UK.
Key stakeholders and policymakers discussed the development of new regulation under the Online Safety Act, including an assessment of Ofcom’s recently published Illegal Harms Codes and guidance.
Delegates assessed the content of the new Codes and the practicalities of enforcement, as well as priorities for future updates to the guidance following the omission of measures for dealing with suicide and self-harm material. The roles and responsibilities of online service providers were also discussed, along with the support smaller platforms need to conduct mandatory risk assessments by March 2025 and demonstrate subsequent compliance.
The conference was also an opportunity to examine the way forward for protecting against child sexual abuse material, with a further consultation on expanding the Codes expected in spring 2025, looking at account banning, hash matching and the use of AI in tackling CSAM.
We expected discussion on the strategic priorities for online safety laid out by the Secretary of State for Science, Innovation and Technology in November 2024, and consideration of issues raised during Ofcom’s consultation on Protecting Children from Harms Online. Discussion focused on the role of education providers, new approaches to improving children’s media literacy, and the relationship between offline and online spaces.
The agenda looked at the implications of moves in the United States to end fact-checking on some social media platforms, and the latest developments in addressing the practicalities of implementing age verification requirements. Identifying and classifying content, and issues around attention-retaining features that may negatively impact children, were also discussed. Sessions assessed next steps for safeguarding against harmful content, including embedding safety by design, future regulation and responsibilities relating to livestreaming sites, and challenges around moderating livestreams and homepage feeds.
Considerations around the balance between freedom of expression and online safety were discussed, particularly in the context of identifying, classifying and moderating journalistic content. Attendees also discussed privacy considerations around the provision of information for age verification purposes, to emergency services and through the legal system.
The potential positive contributions of emerging technologies to online safety, and the potential threats emanating from them, were explored, looking at the adequacy of current legislation in regulating AI’s effects on internet safety and the recently announced crackdown on sexually explicit deepfakes. We also expected discussion on next steps for innovation to support child protection and the development of automated detection, and on areas for further research following Ofcom’s Evaluation in Online Safety: A Discussion of Hate Speech Classification and Safety Measures, published in March 2024.
Further sessions focused on enhancing international cooperation to protect children online, with recent commitments to international regulatory consistency, looking at opportunities for sharing expertise and next steps for private-public collaboration.
We are pleased to have been able to include keynote sessions with: Michael Tunks, Principal, Online Safety, Ofcom; Owen Bennett, Head, International Online Safety, Ofcom; Martin Harris Hess, Head, Sector for the Protection of Minors Online, European Commission; Prof Sonia Livingstone, Professor, Department of Media and Communications, LSE; Ben Bradley, Government Relations and Public Policy Manager, TikTok; and Till Sommer, Vice President, Clarity.
Overall, areas for discussion included:
- policy and regulation: implementation of Ofcom’s Illegal Harms Codes and guidance - key considerations for the Government’s strategic priorities for online safety - Ofcom’s consultation and next steps - options for strengthening the Online Safety Act - addressing key stakeholder concerns
- responsibilities: roles of social media platforms, content providers and ISPs in tackling online harms and misinformation - risk assessment and liability - practicalities and challenges for sector stakeholders - transparency and accountability
- regulatory balance: challenges in moderating journalistic content - considerations for freedom of speech - ensuring child protection
- specific issues: impacts of screen-time-extending features - recommender systems - body-image and depressive content - mitigation strategies
- child media literacy: new educational approaches - roles of industry, families, schools and policymakers
- AI and emerging technologies: opportunities and threats in online safety - adequacy of current legislation - innovations in automated detection
- implementation: best practice and practical steps for stakeholder joint-working - addressing technology-amplified threats - moderating livestreams and feeds
- privacy and data sharing: considerations in providing information to authorities - balancing privacy with safety - legal and ethical implications
- international cooperation: building key global relationships - opportunities for UK leadership - regulatory consistency - private-public partnerships
All delegates were able to contribute to the output of the conference, which will be shared with parliamentary, ministerial, departmental and regulatory offices, and more widely. This includes the full proceedings and additional articles submitted by delegates. As well as key stakeholders, those who attended included officials from DBT; DfE; DSIT; DfC, NI; HM Treasury; DoE, ROI; GLD; Home Office; Ofcom; DCEDIY, ROI; The Scottish Government; and the Welsh Government.