- New consortium launches to transform data access and support safety tech companies in building tools to identify and remove harmful content online.
- Consortium comprised of leading experts from the Online Safety Tech Industry Association (OSTIA), Faculty, and PUBLIC.
- Initiative supported by over 20 organisations, including safety tech providers, social media platforms, and NGOs.
The Online Safety Data Initiative launches today, bringing together expertise from suppliers Faculty, OSTIA and PUBLIC, and a range of government, academic, and civil society stakeholders. The initiative will drive innovation in the safety tech sector by providing companies with access to the vital data needed to develop world-class safety tools to identify and remove harmful content online.
Research published by multiple academic and civil society groups shows that the scale of harmful and illegal online activity and content increased significantly in 2020, as a result of the social isolation caused by the global COVID-19 pandemic. Online harms, such as terrorism, child sexual exploitation and abuse, hate speech, disinformation and the advocacy of self-harm and suicide, often target vulnerable individuals and threaten the security of individuals and our nation.
During the Government’s consultation on the Online Harms White Paper, stakeholders within the UK safety tech sector identified access to the required data as the single biggest barrier to developing innovative solutions to address online harms. This project will examine how unlocking access to relevant data can help to drive innovation and competition in safety technology.
Running over 15 months, the project will test methodologies for improving access to datasets that can be used to train Artificial Intelligence (AI) solutions to remove harmful and illegal content and networks. It will seek to understand how the decentralised hosting of online harms data prevents companies from developing technology to tackle the problem, and then identify and prototype some of the most promising solutions.
The consortium holds itself to high security standards and places great emphasis on confidentiality, integrity and transparency. As part of this project, it will be working closely with the Centre for Data Ethics and Innovation (CDEI) to establish a cross-sector independent advisory group to provide additional insight, challenge and transparency.
Drawing on the expertise of its team of over 50 PhDs and experience from working with over 230 customers, Faculty will be responsible for leading the development of novel data science prototype projects to test new and transformative approaches to making online harms data safe and secure to access for safety tech companies.
PUBLIC will be responsible for leading on the discovery phase of the project, during which it will work with stakeholders across the Safety Tech sector to understand data needs and opportunities.
OSTIA will help to engage the collective knowledge, experience and capability of its members, and provide perspective from Safety Tech companies that are at the forefront of the fight against online harms.
In total, the project will be delivered and supported by over twenty organisations, including leading social media platforms, safety tech SMEs and NGOs.
Marc Warner, CEO & Co-Founder, Faculty, said: “Artificial intelligence is an incredibly powerful tool with huge potential to make large and fast-moving challenges such as online harms more manageable. To build AI that performs effectively and safely in the real world, though, you will always need access to real-world data. We’re delighted to partner on this initiative that helps provide access to data for good; for those that need it to be able to build tools which ultimately make the internet a safer place for everybody.”
Ian Stevenson, Chair, OSTIA, said: “The UK Safety Tech sector is already well placed to help build a safer internet, and those of us working in the sector have all encountered difficulties in accessing the data needed for research, development, training and testing. This co-ordinated response has the potential to fuel new solutions to crucial problems in online safety. While the internet and the online safety solutions this project will drive are digital, the effects of online harms are all too human. Ultimately, this project is a path to preventing abuse, improving wellbeing, and even saving lives, and we’re delighted to be partnering to deliver it.”
Andy Richardson, CTO, PUBLIC, said: “The Safety Tech sector includes some of the UK’s fastest growing, most innovative companies, using cutting-edge technology to tackle some of the most urgent challenges society faces today; this project is a unique opportunity to break down the barriers that might stop these companies from building safer online spaces for everyone. We’re delighted to partner with DCMS, Faculty and OSTIA on this and look forward to engaging further with all those who care about this issue.”
Andy Burrows, Head of Child Safety Online Policy at the NSPCC, said: “This project will help to overcome the barriers that many safety tech firms face when developing new AI products. If companies can access datasets more easily to test their products, the result will be a range of solutions that make children safer and that enable platforms to meet their Duty of Care to users.
“The NSPCC is therefore hugely supportive of this work, and we look forward to the project getting underway.”
Mary Aiken, Professor of Forensic Cyberpsychology, said: “The UK is leading worldwide when it comes to recognising online harms and supporting online safety technology or ‘SafetyTech’ solutions. I very much welcome the opportunity to work with DCMS, Faculty and OSTIA on the Online Safety Data Initiative, specifically regarding innovations to tackle technology-facilitated harms such as cyberbullying, harassment, self-harm, child sexual exploitation and abuse, along with hate speech and mis- or disinformation. Trust and transparency are critical in order to realise the value of data, transform its use and drive innovation. Government, civil society and individuals should work together to ensure that data-driven technologies are a force for good, a force that could help all of us to shape and create a safer and more secure cyber society.”
Lydia Grace, Online Harms Programme Manager at Samaritans, said: “We know harmful content relating to self-harm and suicide is far too easily accessible, so it is critical that sectors come together to reduce access to potentially harmful content. It is essential that data is available to inform AI that can effectively detect and respond to harmful content in order to protect vulnerable users. We are excited to be supporting this initiative to improve access to data that can inform the development of effective tools to help users access the benefits of the online environment, whilst being protected from harm.”