Automated Model Deployment Pipeline v1.0

About the project:

In this project, I significantly improved our model deployment process, which had previously required about a week of work from an engineer for each model deployed. The work began with establishing a formalized Model Life Cycle process. I then designed and implemented an Automated Model Deployment pipeline built on our team-developed, proprietary Artifact Registry. The pipeline worked as follows:

1. Our data scientists trained models and logged them to MLflow.
2. Once a model was logged, the automated process retrieved the model and its associated feature-set artifacts from the S3 location backing the corresponding experiment on the MLflow server.
3. The artifacts were converted into packages compatible with our Artifact Registry, uploaded to the registry, and initially tested in the Development environment.
4. Only after all checks and model-validation steps passed was the model deployed to the Staging environment, where our decision science team could validate the model scores.
5. Once all tests and checks in Staging were verified, the model was deployed to Production.
6. In Production, a script on the Airflow server identified the new model, decommissioned the old one, and automatically generated a DAG scheduled to run the next day.

This streamlined process, which previously took up to a week, now completes within hours. It is also parallelizable, allowing multiple models to be deployed simultaneously, which represented a significant boost in efficiency and productivity for our team. The sketches below illustrate a few of these steps.
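
As an illustration of the retrieval step, the sketch below uses the standard MLflow client to download a logged model and its feature-set artifact from the run's backing S3 store. The tracking URI, run ID, and artifact paths are hypothetical placeholders, not the values used in the real pipeline.

    import mlflow
    from mlflow.tracking import MlflowClient

    # Hypothetical values; the real pipeline resolved these from the MLflow experiment.
    TRACKING_URI = "http://mlflow.internal:5000"
    RUN_ID = "<run-id-of-the-logged-model>"
    LOCAL_DIR = "/tmp/model_artifacts"

    mlflow.set_tracking_uri(TRACKING_URI)
    client = MlflowClient()

    # Download the model directory and the feature-set artifact logged with the run.
    # Both artifact paths are assumptions about how the run was organized.
    model_path = client.download_artifacts(RUN_ID, "model", LOCAL_DIR)
    features_path = client.download_artifacts(RUN_ID, "feature_set", LOCAL_DIR)

    print(f"Model artifacts downloaded to {model_path}")
    print(f"Feature-set artifacts downloaded to {features_path}")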

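The gating between environments can be pictured as a small promotion loop: the packaged model advances only when every check in the current environment passes. This is a minimal sketch; deploy and run_validation_checks are hypothetical stand-ins for the proprietary Artifact Registry upload, deployment, and validation steps.

    ENVIRONMENTS = ["dev", "staging", "prod"]

    def promote_model(model_package, deploy, run_validation_checks):
        """Advance a packaged model through dev -> staging -> prod, stopping
        at the first environment whose checks do not pass."""
        for env in ENVIRONMENTS:
            deploy(model_package, env)
            if not run_validation_checks(model_package, env):
                raise RuntimeError(f"Validation failed in {env}; promotion stopped")
            # In Staging, this gate is where the decision science team's
            # score validation had to pass before promotion to Production.
        return ENVIRONMENTS[-1]

    # Example with stand-in callables:
    if __name__ == "__main__":
        final_env = promote_model(
            "example_model-1.0.0.pkg",
            deploy=lambda pkg, env: print(f"deploying {pkg} to {env}"),
            run_validation_checks=lambda pkg, env: True,
        )
        print(f"Model promoted to {final_env}")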

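For the final step, the DAG generated on the Airflow server can be sketched as a small DAG-definition file written into the dags/ folder for the newly deployed model (Airflow 2-style imports). The model name, deployment date, and scoring task are hypothetical placeholders; the real generator also handled decommissioning the previous model. With a daily schedule and a start date on the deployment day, the first scheduled run lands on the following day.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Values filled in by the generator when a new model reaches Production;
    # the names and date here are hypothetical.
    MODEL_NAME = "example_model"
    DEPLOYMENT_DATE = datetime(2023, 6, 1)  # baked in at generation time

    def score_with_model():
        """Placeholder for the real scoring task, which loaded the newly
        deployed model and produced scores."""
        print(f"Running daily scoring for {MODEL_NAME}")

    # Daily schedule starting on the deployment day: the first run is
    # triggered once that day's interval completes, i.e. the next day.
    with DAG(
        dag_id=f"{MODEL_NAME}_daily_scoring",
        start_date=DEPLOYMENT_DATE,
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="score_with_model",
            python_callable=score_with_model,
        )
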
Technology used:

Python, Kubernetes API, MLflow, AWS, S3, Airflow