Deploying Machine Learning models as Microservices
Data Scientists have traditionally focused on building models with machine learning algorithms. Yet such models are just one part of an actual production environment. Increasingly, Data Scientists are expected to consider how their models can be deployed successfully in production. With the shift towards cloud computing, a new architectural style, known as microservices, has emerged.
The idea of microservices is to build self-contained services that can be scaled and managed independently. However, many Data Scientists still lack the knowledge and hands-on experience needed to achieve such deployments. This talk will introduce how machine learning models can be deployed as containerised applications. A container is a standard unit of software that packages up the code together with all its dependencies, enabling the application to run reliably across different computing environments.
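To make the idea concrete, a model-serving microservice typically wraps a trained model behind an HTTP endpoint. The following is a minimal sketch, assuming Flask and scikit-learn are available; the toy model, route name, and JSON payload shape are illustrative choices, not part of the talk itself.

```python
from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression
import numpy as np

app = Flask(__name__)

# Train a toy model at start-up; a real service would instead load a
# serialised model artefact produced by the training pipeline.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON of the form {"features": [[2.5], [0.5]]}
    payload = request.get_json(force=True)
    preds = model.predict(np.array(payload["features"]))
    return jsonify({"predictions": preds.tolist()})
```

Once the service works locally, it can be packaged into a container image along with its Python dependencies, so the same artefact runs identically on a laptop, a CI runner, or a cloud cluster, and can be scaled independently of any other service.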