
Press release

Faculty brings on artificial intelligence pioneer Professor Stuart Russell as advisor for AI Safety

World-leading AI specialist Faculty today announces that Stuart Russell, Computer Science Professor at the University of California, Berkeley and renowned pioneer of modern AI, will become a special advisor. Russell will help steer Faculty’s AI Safety research and development, which aims to fuse the latest academic research and thinking with commercial solutions that ensure AI’s safe deployment in the real world.

The Faculty research lab explores the frontier of AI through the publication of research papers and the development of new technology. While AI Safety concerns are wide-ranging, Faculty’s main focus has been on building first-of-its-kind practical tooling for its data science platform to help organisations mitigate the dangers of black-box algorithms and make AI safe today. This means ensuring that the decisions made by machine learning models can be understood (explainability), are unbiased (fairness) and work the way they were intended (robustness).

Russell brings deep experience to Faculty’s safety programme. He is well-known in the field for writing the definitive textbook Artificial Intelligence: A Modern Approach, which is used by over 1,400 universities in 128 countries. He has also recently published Human Compatible, which has been called ‘the most important book on AI so far’. He has spoken extensively on the importance of safety considerations for AI and its future development, and sits on the Advisory Board of the Centre for the Study of Existential Risk at the University of Cambridge.

Professor Stuart Russell said: 

“I’m delighted to be working with the world-class team at Faculty on the critically important issue of AI Safety in the real world. Financial, employment, health, and social security decisions are increasingly being made by algorithms, yet few data scientists can say with certainty that their algorithms are safe. I will be drawing on my research and experience to advise the team and help them to ensure that the public and private algorithms deployed are fair, explainable, and robust.”

Marc Warner, CEO &amp; Co-Founder of Faculty, said:

“Stuart is the world’s leading AI Safety researcher and literally wrote the book on modern AI. He has been an inspiration to me personally, and we are delighted to be working together. With Stuart’s guidance and expertise we believe our AI Safety research and applied work will empower organisations of all kinds to make the most of AI in a way that is accountable and explainable as well as profitable.” 

Faculty’s AI Safety work also extends into further-reaching concerns about AI’s future development. Faculty collaborates with leading universities including Harvard and UCL, and has recently published papers on Asymmetric Shapley Values; Invariant-equivariant representation learning for multi-class data; Parenting: Safe Reinforcement Learning from Human Input; and JUNIPR: an interpretable probabilistic model for discrimination. The first of these was presented at ICML in June 2019.

The growing research lab is headed up by Faculty’s Dr Ilya Feige and draws upon over 45 PhD experts across Faculty to deliver the company’s cutting-edge R&D and channel this innovation into real-world applications for clients.

To find out more about what Faculty can do for you and your organisation, get in touch.