Machine Learning: Accelerating Your Model Deployment – Part 2


As machine learning initiatives become more prominent across companies looking to leverage their data to improve future projections and decision-making, demand for frameworks that simplify ML model development has been soaring. In part 1 of this series, we looked at the challenges faced in ML model development and deployment that have resulted in the failure of more than 25% of AI & ML initiatives, as noted by IDC. We also discussed some options to improve the speed and ease of ML model development, from tools such as the Amazon SageMaker stack to the concept of enhancing operational efficiency across organizations.

In the second part of this series, we will take a look at Rackspace Technology's Model Factory Framework (MLOps) and how it improves efficiency and speed across model development, deployment, monitoring, and governance.

End-to-End ML Blueprint

As we discussed earlier, a large variety of tools and frameworks exists within the data science and machine learning universe. During development, ML models flow from data science teams to operations teams, and differences in each team's preferred tools can introduce significant delays in the absence of standardization.

The Rackspace Technology Model Factory Framework provides a model lifecycle management solution in the form of a modular architectural pattern built using open source tools that are platform, tooling, and framework agnostic. It is designed to improve the collaboration between data scientists and operations teams so that they can rapidly develop models, automate packaging, and deploy to multiple environments.

The framework allows integration with AWS services and industry-standard automation tools such as Jenkins, Airflow, and Kubeflow. It supports a variety of frameworks such as TensorFlow, scikit-learn, Spark ML, spaCy, and PyTorch, and can also be deployed to different hosting platforms such as Kubernetes or Amazon SageMaker.

Benefits of the Model Factory Framework

The Model Factory Framework affords large gains in efficiency, cutting the ML lifecycle from the average 15+ steps to as few as 5. Employing a single source of truth for management, it also automates the handoff process across teams and simplifies maintenance and troubleshooting.

From the perspective of data scientists, the Model Factory Framework makes their code standardized and reproducible across environments, enables experiment and training tracking, and can yield up to 60% savings in compute costs through scripted access to spot instance training. For operations teams, the framework offers built-in tools for diagnostics, performance monitoring, and model drift mitigation. It also offers a model registry to track model versions over time. Overall, this helps organizations reduce model deployment time and effort, accelerating time to business insights and ROI.

Overview of the Framework

The Model Factory Framework employs a curated set of notebook templates and proprietary DSLs, simplifying onboarding, reproduction across environments, experiment tracking, hyperparameter tuning, and consistent, domain-agnostic packaging of models and code. Once a model is packaged, the framework can execute the end-to-end pipeline, which runs the pre-processing, feature engineering, and training jobs, logs generated metrics and artifacts, and deploys the model across multiple environments.
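The stages described above can be sketched as a simple chained pipeline. This is purely illustrative: every function name here is hypothetical and stands in for the framework's actual pre-processing, feature engineering, training, and metric-logging jobs.

```python
# Illustrative sketch of an end-to-end pipeline run, assuming
# hypothetical stage functions -- not the Model Factory Framework API.

def preprocess(raw_rows):
    """Pre-processing job: drop records with missing values."""
    return [r for r in raw_rows if r.get("value") is not None]

def engineer_features(rows):
    """Feature engineering job: derive a squared feature."""
    return [{**r, "value_sq": r["value"] ** 2} for r in rows]

def train(rows):
    """Training job: fit a trivial stand-in 'mean' model."""
    mean = sum(r["value"] for r in rows) / len(rows)
    return {"kind": "mean-model", "mean": mean}

def log_metrics(model, rows):
    """Log generated metrics for the run."""
    return {"n_samples": len(rows), "model_mean": model["mean"]}

def run_pipeline(raw_rows):
    """Execute the stages in order, returning the model and metrics."""
    rows = preprocess(raw_rows)
    rows = engineer_features(rows)
    model = train(rows)
    metrics = log_metrics(model, rows)
    return model, metrics
```

In a real deployment, each stage would run as a managed job (for example, on SageMaker) and the metrics would be written to a tracking store rather than returned in memory.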

Development

The Model Factory Framework supports multiple avenues of development: users can develop locally, integrate with a notebooks server through an Integrated Development Environment (IDE), use SageMaker Notebooks, or rely on automated environment deployment using AWS tooling such as AWS CodeStar.

Deployment

Multiple platform backends are supported for the same model code, and models can be deployed to Amazon SageMaker, Amazon EMR, Amazon ECS, and Amazon EKS. Revision histories, including artifacts and notebooks, are tracked across real-time, batch, and streaming inference pipelines.
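The revision tracking described above can be pictured as a minimal in-memory model registry. This is a hypothetical sketch, not the framework's actual registry implementation; the names and the deployment-target labels are assumptions for illustration.

```python
# Minimal sketch of a model registry with revision history,
# assuming hypothetical names -- not the framework's actual API.

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of revision records

    def register(self, name, artifact_uri, target):
        """Record a new revision of a model and return its version number."""
        revs = self._versions.setdefault(name, [])
        rev = {
            "version": len(revs) + 1,
            "artifact": artifact_uri,   # e.g. an S3 URI for the packaged model
            "target": target,           # e.g. "sagemaker", "emr", "ecs", "eks"
        }
        revs.append(rev)
        return rev["version"]

    def latest(self, name):
        """Return the most recent revision record for a model."""
        return self._versions[name][-1]

    def history(self, name):
        """Return the full revision history for a model."""
        return list(self._versions[name])
```

A production registry would persist these records and attach the notebooks and pipeline artifacts associated with each revision.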

Monitoring

Model requests and responses are monitored and logged for detailed analysis, enabling teams to detect and address model and data drift.
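One simple way to flag data drift from monitored requests is to compare the distribution of a live feature against its training baseline. The following is an illustrative sketch using a mean-shift score; the framework's actual drift diagnostics are not specified here, and the threshold value is an assumption.

```python
# Illustrative data-drift check: flag a feature whose live mean has
# shifted far from the training baseline, measured in baseline
# standard deviations. Not the framework's actual diagnostic.
import statistics

def drift_score(baseline, live):
    """Absolute shift in means, scaled by the baseline std dev."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(live) - base_mean) / base_std

def has_drifted(baseline, live, threshold=2.0):
    """Flag drift when the scaled mean shift exceeds the threshold."""
    return drift_score(baseline, live) > threshold
```

In practice, teams often use richer distributional tests (e.g., Kolmogorov–Smirnov or population stability index) over windows of logged requests, but the principle of comparing live traffic against a training baseline is the same.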

Governance

Data and model artifacts are clearly separated, and access can be controlled using AWS IAM and bucket policies covering model feature stores, models, and associated pipeline artifacts. The framework also supports role-based access control through Amazon Cognito, traceability with Data Version Control (DVC), and auditing and accounting through extensive tagging.
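As a concrete illustration of bucket-policy-based control over model artifacts, a policy like the following grants read access to a single deployment role. The account ID, role name, and bucket name below are placeholders, not values from the framework.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowModelArtifactReadForDeployRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/mlops-deploy-role"
      },
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-model-artifacts/models/*"
    }
  ]
}
```

Separate buckets (or prefixes) with their own policies can then isolate feature stores, packaged models, and pipeline artifacts from one another.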

To learn more about the features and benefits of the Model Factory Framework, from development to deployment, and about the model registry component that offers a centralized view for managing models through their deployment lifecycle, download our whitepaper.

Using a combination of proven accelerators, AWS native tools, and the Model Factory Framework, companies can significantly accelerate and automate model development, reducing lag and effort and improving time to insights and ROI. If your organization is interested in utilizing the Model Factory Framework for your ML use cases, get in touch with our machine learning experts today!

