Machine learning in production is different from ML in an R&D environment: some issues only arise once a solution is deployed to staging or production. So how can you test your ML models successfully before they ever reach production? This session presents a number of techniques to test the quality of your ML models and detect their decay, in both R&D and production environments. We will present examples of issues commonly encountered in ML systems and show how to test and monitor your data, your model development, and your infrastructure.
Description
Using machine learning in real-world applications and production systems is a very complex task involving issues rarely encountered in toy problems, R&D environments, or offline cases.
Testing, monitoring, and logging are key considerations for assessing the decay, current status, and production-readiness of machine learning systems. But how much of this is enough? Where do we even start? Who should be responsible for the testing and the monitoring? How often have you heard the phrase “test in production” when it comes to machine learning? If your answer is “too often”, perhaps you need to change your strategy.
We will cover some of the most frequent issues encountered in real-life ML applications and show how to make our systems more robust against them. We will also look at a number of indicators that point to the decay of models or algorithms in production systems.
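As a concrete illustration of one such decay indicator (a minimal sketch, not the specific method presented in the talk), the snippet below compares the distribution of a single feature at training time against recent production traffic using a two-sample Kolmogorov-Smirnov test from scipy; the data, feature, and alerting threshold are hypothetical and only stand in for your own monitoring pipeline.

    import numpy as np
    from scipy.stats import ks_2samp

    # Hypothetical feature values: what the model saw at training time
    # versus what it is seeing in recent production traffic.
    rng = np.random.default_rng(seed=42)
    training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
    production_feature = rng.normal(loc=0.3, scale=1.2, size=5_000)  # drifted

    # A small p-value suggests the production distribution no longer matches
    # the training distribution -- one possible indicator of model decay.
    statistic, p_value = ks_2samp(training_feature, production_feature)

    ALERT_THRESHOLD = 0.01  # assumed alerting threshold for this sketch
    if p_value < ALERT_THRESHOLD:
        print(f"Possible drift: KS={statistic:.3f}, p={p_value:.4f}")
    else:
        print(f"No significant drift: KS={statistic:.3f}, p={p_value:.4f}")

In a real system a check like this would run on a schedule against logged production features and feed an alerting or retraining workflow rather than a print statement.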
By the end of the talk, the audience will have a clear rubric with actionable tests and examples for ensuring that the quality of a model in production is adequate. The talk will also provide practical guidelines to help engineers, DevOps practitioners, and data scientists evaluate and improve the quality of ML models before they even reach the production stage.
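To make “actionable tests” concrete, here is a minimal sketch of the kind of pre-deployment check that could live in a CI pipeline: a pytest-style test asserting that a candidate model beats a naive baseline and clears a fixed quality bar on a held-out set. The model, dataset, and thresholds are assumptions made for illustration, not the rubric presented in the talk.

    # test_model_quality.py -- run with `pytest`
    from sklearn.datasets import make_classification
    from sklearn.dummy import DummyClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    MIN_ACCURACY = 0.80  # assumed quality bar for this example


    def test_model_beats_baseline_and_quality_bar():
        # Hypothetical dataset standing in for your real training/holdout data.
        X, y = make_classification(
            n_samples=2_000, n_features=20, class_sep=2.0, random_state=0
        )
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0
        )

        baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
        model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

        baseline_acc = accuracy_score(y_test, baseline.predict(X_test))
        model_acc = accuracy_score(y_test, model.predict(X_test))

        # The candidate must clearly outperform a naive baseline...
        assert model_acc > baseline_acc + 0.05
        # ...and meet an absolute quality bar before it is allowed to ship.
        assert model_acc >= MIN_ACCURACY

Gating deployments on tests like this is one way to avoid the “test in production” trap mentioned above.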
Who and why?
Data scientists, machine learning / research engineers, and data engineers working with machine learning at scale or in real-life systems. Any software engineer working on data-intensive applications.
Basic knowledge of data and machine learning is needed.
By the end of the talk, attendees will have a set of actionable tasks to help them improve the quality of their machine learning models in production.