

Tech Talk: Privacy in AI Safety

Many organisations find themselves on the brink of transformative AI adoption, only to be held back because they are missing one crucial capability: data privacy. AI applications depend on access to data, and many real-world datasets – particularly in sectors like healthcare – contain highly sensitive personal information, so access to them must be strictly regulated and the privacy of the individuals they describe carefully protected.

In the best-case scenario, this barrier to entry increases the cost and time required to deploy AI. In the worst-case scenario, it can entirely prevent organisations from adopting – and benefiting from – AI technology.

In this talk, Faculty Data Scientists Mark Worrall and Christian Donnerer will:

  • Show how differentially-private synthetic data can solve this challenge by retaining the utility of the original data while protecting the privacy of individuals.
  • Present our method for synthetic data generation using Variational Autoencoders.
  • Demonstrate how we have used this method to generate synthetic, high-utility, completely private data from a selection of real-world datasets.

We believe that this approach can unlock the value of many real-world datasets without compromising privacy – and, in the process, make AI safe, private, and accessible for a much wider range of companies.
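For readers who want a concrete picture of the technique ahead of the talk, the sketch below trains a toy variational autoencoder under differential privacy using DP-SGD (here via the open-source Opacus library for PyTorch) and then samples it to produce synthetic rows. It is a minimal illustration, not Faculty's implementation: the dataset, architecture, and every hyperparameter are assumptions chosen for brevity.

```python
# Illustrative only: a toy tabular VAE trained with DP-SGD via Opacus,
# then sampled from the prior to generate synthetic rows. This is NOT the
# implementation presented in the talk; the data, architecture, and all
# hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

class VAE(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients flowing.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Stand-in for a sensitive tabular dataset: 1,000 rows, 10 numeric columns.
rows = torch.randn(1000, 10)
loader = DataLoader(TensorDataset(rows), batch_size=64)

vae = VAE(n_features=10)
optimizer = torch.optim.Adam(vae.parameters(), lr=1e-3)

# Opacus replaces ordinary gradient updates with DP-SGD: per-sample gradient
# clipping plus calibrated Gaussian noise. The privacy guarantee attaches to
# the trained weights, so anything decoded from them inherits it.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=vae,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # more noise -> stronger privacy, lower utility
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

for epoch in range(5):
    for (batch,) in loader:
        optimizer.zero_grad()
        recon, mu, logvar = model(batch)
        vae_loss(recon, batch, mu, logvar).backward()
        optimizer.step()

print(f"spent privacy budget: epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f}")

# Synthetic data: sample latents from the prior and decode. No real record is
# ever copied; the decoder only ever saw noisy, clipped gradients.
with torch.no_grad():
    synthetic_rows = vae.decoder(torch.randn(1000, 8))
```

One attractive property of this setup is that the noise is injected during training rather than into the data itself, so by the post-processing property of differential privacy the spent budget (epsilon) stays fixed no matter how many synthetic rows are later generated.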

This event has finished

date & time

Thursday 14 May, 16.00 GMT

To find out more about what Faculty can do for you and your organisation, get in touch.