AI is increasingly making crucial decisions with no human intervention.
But without human oversight, how do we ensure that an algorithm will behave fairly, accurately, and consistently outside the lab, in the real world?
If we’re going to trust artificial intelligence to make these decisions, new checks and balances will be needed. We must be able to estimate the uncertainty of our predictions and be confident that our models won’t be derailed by erroneous inputs and parameters.
In this webinar, Faculty Data Scientist Tobias Schwedes will demonstrate:
- How non-robust models can degrade prediction accuracy, and how to mitigate the resulting issues.
- How to know if you can trust your model’s predictions.
- How Faculty’s AI Safety tools can help you assess model robustness and turn a non-robust model into a robust one.
No prior knowledge of AI/coding is necessary for this webinar.