Read our papers on machine learning

Invariant-equivariant representation learning for multi-class data

A representation-learning technique that separates the discrete category information in data from its continuous style information. Published in ICML.

JUNIPR: a Framework for Unsupervised Machine Learning in Particle Physics

A natural and interpretable approach to unsupervised learning with widespread applications in particle physics. Published in the European Physical Journal.

Binary JUNIPR: an interpretable probabilistic model for discrimination

A demonstration that JUNIPR provides both interpretability and state-of-the-art performance in the critical classification tasks of the field. Published in Physical Review Letters.

Learning disentangled representations with the Wasserstein Autoencoder

A simpler method to learn disentangled representations. Published in ECML.

Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders

An approach to learning complex (deep, hierarchical) data representations without the need for specialised model-architecture tuning.

Gaussian mixture models with Wasserstein distance

A concrete demonstration of how Optimal Transport can help when modelling with discrete latent variables. Published in ACML.

Improving latent variable descriptiveness with AutoGen

Language is so complex that latent-variable models in natural language processing often fail to represent information meaningfully. We show that a commonly applied tweak can be interpreted as a novel, well-justified probabilistic model. Published in ECML.

Read our papers on AI safety

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

A mathematically justified method for incorporating causality into Shapley values, overcoming a fundamental limitation of current explainability methods. Published in NeurIPS.

Shapley explainability on the data manifold

A demonstration of how to calculate Shapley values, the most commonly used method for explaining black-box models, without the flawed assumption that features in the data are independent. This yields the first general, scalable approach to calculating Shapley values correctly. Published in ICLR.

Human-interpretable model explainability on high-dimensional data

A framework for explaining black-box models in terms of the high-level features (e.g. people’s characteristics, objects) that humans care about, rather than the low-level features (e.g. pixels) that the models use.

Explainability for fair machine learning

The first demonstration of directly explaining bias in AI models. Submitted.

Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy

A method to share high-dimensional data with the powerful guarantees of Local Differential Privacy. Submitted.

To find out more about what Faculty can do for you and your organisation, get in touch.