Scientific collaborators from Harvard University and UC Berkeley.
Faculty collaborated with two leading US universities on a set of scientific research papers using machine learning (ML) to distinguish between fundamental particles at the Large Hadron Collider at CERN.
To run the ML experiments required to demonstrate the utility of their methodology, the team of researchers needed both a huge amount of compute to run hundreds to thousands of experiments and the flexibility to configure their servers with modern open-source machine learning libraries.
Initially, the team had access to one of the universities' compute clusters, but configuration difficulties, a slow queueing process, and a lack of portability that hindered collaboration made running experiments prohibitively slow. In four months, little progress was made on the research project.
The team opted to use Faculty’s data science platform to help speed up the process of running the experiments.
By using Faculty Platform, the team was able to run the requisite number of experiments in less than a month. The platform allowed the team to configure each server easily and provided elastic compute, which meant there were no queues. The team also had fast access to the most recent and cutting-edge ML libraries, which came pre-installed or could be installed as needed.
Finally, the platform provided the international research team (working in Florida, California, Boston, and the UK) with a real-time collaborative environment for running the experiments and sharing the results quickly and easily.
By using Faculty Platform, the team of researchers completed their research, achieving state-of-the-art performance with a physically interpretable machine learning model. The paper was published in June 2019 and can be found here.