Software is eating the world. So Marc Andreessen claimed almost 10 years ago when he spoke about how all major business activity was being built into software, and organisations that embraced this change were becoming more efficient and building a strong and sustainable competitive advantage (e.g. Google, Amazon, Facebook).
It may not have been obvious when he wrote it, but it’s hard to argue against now.
I believe we are at the beginning of the next paradigm shift: Software 2.0, a term coined by Andrej Karpathy.
Software 1.0 is classical software, where developers write explicit instructions telling the machine how to act: if this condition is met, take that action; for every item in this list, do this thing. These programs can be simple or can grow very complex, giving us some of the wonderful technology we now take for granted, such as email, global payment systems and online booking. Ultimately, though, these programs are restricted to the instructions and actions the human software developer specifies.
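To make this concrete, here is a minimal sketch of the Software 1.0 style: a hypothetical spam filter where every behaviour is an explicit, hand-written rule (the function name, blocked phrases and examples are all invented for illustration).

```python
# Software 1.0: the developer spells out every rule by hand.

def is_spam(message: str) -> bool:
    """Classify a message using explicit, hand-written rules."""
    blocked_phrases = {"winner", "free money", "click here"}
    text = message.lower()
    # If any blocked phrase appears, flag the message as spam.
    return any(phrase in text for phrase in blocked_phrases)

print(is_spam("Click here to claim your free money!"))  # True
print(is_spam("Lunch at noon?"))                        # False
```

The program can only ever do what its rules anticipate; a spammer who avoids the blocked phrases slips straight through until a developer writes a new rule.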
In contrast, Software 2.0 is powered by Artificial Intelligence (AI): instead of writing explicit instructions, data scientists train models on data, and the models learn the behaviours and actions that power the system. Traditional code is still important to define the structure and architecture, but it's the trained model, created by feeding in data, that really defines the software.
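The contrast can be sketched in a few lines. This is a deliberately tiny stand-in for real model training: instead of hand-coding a decision rule, a single threshold is fitted from hypothetical labelled data (the feature values, labels and function names here are all invented for illustration).

```python
# Software 2.0 sketch: the decision rule is learned from data,
# not written by the developer.

# Hypothetical labelled examples: (feature value, label) pairs.
training_data = [(12, 0), (15, 0), (20, 0), (80, 1), (95, 1), (120, 1)]

def fit_threshold(data):
    """Pick the threshold that best separates the two classes."""
    best_threshold, best_accuracy = None, -1.0
    for candidate, _ in data:
        accuracy = sum((x >= candidate) == bool(y) for x, y in data) / len(data)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = candidate, accuracy
    return best_threshold

threshold = fit_threshold(training_data)  # learned, not hand-coded

def predict(x):
    return int(x >= threshold)

print(threshold)      # 80 -- chosen by the data, not the developer
print(predict(100))   # 1
print(predict(10))    # 0
```

Change the data and the behaviour changes with it, with no developer rewriting the rule. That is the essence of the shift: the artefact that matters is no longer just the code, but the model the data produces.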
At Faculty, we are pioneering both Software 2.0 and AI.
Making AI real
Since the business was founded in 2014, our world-leading data scientists have completed over 300 AI projects, many of which are detailed here. We have built software (2.0) that is powering many businesses and helping to improve and speed up decision-making and drive improvements to products and services.
We are in the middle of the AI paradigm shift. We see this shift in all the project work we undertake and through working with other data science teams. It’s a challenge to make the shift – most of the world still runs on Software 1.0.
To deliver value from AI effectively and to support the building of Software 2.0, we need to provide data scientists with the right environment, the best infrastructure and the latest cutting-edge tooling. Software engineers already have incredible tools that help them be more productive and build reliable, maintainable and scalable systems (such as IDEs, debuggers and testing frameworks). It's now essential that data scientists have equivalents.
The data science workflow has similarities to the software development lifecycle, but also fundamental differences. Data science is far more iterative and requires experimentation as data scientists explore and understand the problem space and test different modelling techniques. On top of this, the artefact that is ultimately produced is a model, which is more than just code: it also encompasses the data that trained it and the environment in which it was created.
Software 2.0 tooling needs to support the entirety of the data science workflow. The image below shows some of the stages in the workflow and what is needed to support them.
We’ve built the Software 2.0 workflow into Faculty Platform, which provides cutting-edge AI tooling and infrastructure, empowering data science teams and giving them the independence to build and deploy AI models quickly and reproducibly.
We initially engineered the platform to solve our own challenges of scaling a data science team, but since publicly launching in 2017 we’ve seen data science teams around the world adopt it and use it to significantly increase their productivity. We are also very proud that Faculty Platform is being used by University College London’s data science Master’s programme to help educate the next generation of data scientists.
As more organisations make the shift into the AI era, there is a strong need for the field to mature. This is especially true in the verification and testing of AI models. AI is being used by critical systems in the real world, and it’s essential that the success and failure scenarios are deeply understood.
With advanced AI techniques, such as deep learning, most data scientists are unable to explain why their models make the predictions they do, let alone whether those models are fair, respect individual privacy or are robust to changes in the underlying data. Regulated industries are likely to come up against this problem first, but this is a broadly applicable challenge that must be taken seriously by anyone building Software 2.0 and wanting to make AI real. This was a topic we covered in a recent paper on arXiv and that we explained in a recent blog.
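One widely used, model-agnostic way to start probing what a model relies on is permutation importance: shuffle one input feature and measure how much accuracy drops. Below is a minimal pure-Python sketch of the idea, not the method from our paper; the toy model, data and function names are all hypothetical.

```python
# Permutation importance: shuffle one feature's values and measure the
# drop in accuracy -- a simple probe of which inputs the model relies on.
import random

random.seed(0)

# Toy "trained model": it only ever looks at the first feature.
def model(row):
    return int(row[0] > 50)

# Hypothetical held-out data: rows of two features, plus a label.
rows = [([80, 3], 1), ([90, 7], 1), ([10, 9], 0), ([20, 1], 0)]

def accuracy(data):
    return sum(model(x) == y for x, y in data) / len(data)

def permutation_importance(data, feature_idx):
    """Drop in accuracy when one feature's values are shuffled."""
    baseline = accuracy(data)
    shuffled_values = [x[feature_idx] for x, _ in data]
    random.shuffle(shuffled_values)
    permuted = []
    for (x, y), value in zip(data, shuffled_values):
        new_x = list(x)
        new_x[feature_idx] = value
        permuted.append((new_x, y))
    return baseline - accuracy(permuted)

# Feature 1 is ignored by the model, so shuffling it changes nothing.
print(permutation_importance(rows, 1))  # 0.0
```

Techniques like this only scratch the surface of the fairness, privacy and robustness questions above, but they illustrate that model behaviour can be interrogated empirically even when the model itself is a black box.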
While providing the software that makes data science possible is a crucial part of making AI real, our experience shows that this needs to be combined with the right strategy and skills for the technology to be successfully deployed today. The right strategy is needed to make sure a new technology like AI actually delivers real business impact, and the right skills ensure that internal teams are able to get the most out of AI.
Artificial Intelligence is an important and exciting technology, and its influence in the world will only grow. Stop talking about the AI future and start being part of it.
To bring Andreessen’s quote into the AI era: Software 2.0 is eating the world.
If you want to find out more about how Faculty can help you in your Software 2.0 journey, please get in touch. I’d love to speak with you.