As more and more organizations design and implement software using data and AI solutions, a growing community has become aware of the challenges of moving a product from development to production. This shift emphasizes that models must not only function effectively but also meet the specific needs of the organization. That means building systems that let the organization produce and deliver them efficiently, hardening them against failures, enabling recovery from malfunctions and system breakdowns, and, most importantly, doing all of this within a learning cycle that lets the organization grow and improve from one project to the next.
A Forbes article highlights how challenging it can be to deploy AI models in the real world. Many models fail to produce consistent results and may even degrade over time, disappointing businesses that expected a greater return on investment. Forbes points to some conspicuous examples of ML-driven systems that failed prominently; for instance, well-performing models for detecting malicious URLs showed a sharp decline in performance. The article also stresses the need for a robust approach to data quality management and MLOps to ensure the successful deployment of ML models.
The Journey Begins: Uncovering the World of Reliable Data & ML Lifecycles
This blog post is only the beginning of a storyline about reliable Data & ML solutions, in which we will unfold all the pieces of an end-to-end ML lifecycle. In particular, we present the current state of the market and its challenges by highlighting some important pain points along the way. Afterwards, we outline our proposed approach, the Digital Highway for MLOps, to address these challenges, and we illustrate how organizations can generate sustainable value by building and running reliable machine learning solutions.
Read this blog post to learn more about our view of the end-to-end data and ML lifecycle, the most relevant concepts, and best practices. Then stay tuned for the upcoming blog posts, which will continue the story of the end-to-end ML lifecycle by diving deeper into each part!
Current state of the market
According to BARC's annual State of AI report for 2022, businesses must embrace AI to stay competitive in today's market. AI can be used to automate mundane tasks, improve customer experience, and drive new revenue streams. Companies must also have a solid strategy in place for implementing AI, including having the right talent, data, and infrastructure. Furthermore, organizations must continuously evaluate and improve their AI initiatives to stay ahead of the competition.
However, the article makes clear that organizations face more challenges in deploying ML models to production than in developing them. The majority of organizations surveyed (55%) have not yet deployed an ML model, and only a small percentage (10%) consider themselves capable of succeeding in this area. To keep ML models functioning properly in production, it is important to have a solid understanding of DataOps and MLOps in order to anticipate and mitigate potential issues: practitioners familiar with these concepts are reported to be 3.5 times less likely to encounter major complexity in deployment.
The article concludes that organizations that effectively leverage AI will be at the forefront of innovation and will have a significant advantage in the marketplace. Moreover, companies that are familiar with the concepts of DataOps and MLOps have a clearer understanding of what can be achieved with machine learning and can plan their projects better. To clarify the conceptual challenges associated with DataOps and MLOps, we provide our understanding of these two concepts right away.
Overcoming Challenges: Navigating the Path to Success
The challenges faced by organizations implementing AI vary depending on the stage of implementation. At the early stage of new AI projects, the primary challenge reported is demonstrating the value of AI to the business. As projects progress and organizations try to expand their use of AI, managing the risks associated with AI, obtaining support from executive leadership, and ensuring ongoing maintenance become the primary impediments. The last of these means that businesses must keep producing beneficial outcomes after the initial launch rather than generating meaningless information, which in turn requires sufficient and sustained funding.
The Power of "Ops" in DataOps and MLOps: Streamlining Operations
MLOps encompasses not only the development and deployment of models, which enables organizations to move smoothly from the lab to practical use, but also the operations side. In addition to the benefits of DataOps and MLOps best practices, we would also like to emphasize the potential of leveraging the progress that DevOps has made in operating traditional software. Site Reliability Engineering (SRE), a set of principles and practices that address the challenges of operating large-scale, mission-critical software systems, has played a major role in this regard.
Having now mentioned DataOps, MLOps, and DevOps as solution concepts, we recognize that each of them comes with its own challenges.
Undoubtedly, overcoming all of the aforementioned challenges is integral to a business's success, and doing so requires careful understanding as well as great commitment and effort. The risk of failure lurks behind many steps on the way from product development to product delivery. Software outages can and do happen, and they can severely impact the quality of an organization's deliverables and its credibility, as well as degrade the user experience.
All Personas in the end-to-end ML Lifecycle
Take a look at all the different roles and their responsibilities.
The different personas involved in an end-to-end Machine Learning (ML) lifecycle face various challenges (see Figure 1), including difficulty in identifying and specifying the business problem and aligning the ML solution with the organization's goals and objectives. They also struggle to obtain rich, varied, and relevant data and to manage vast amounts of it. Further difficulties arise in building and operating a scalable and reliable data system, integrating multiple data sources, managing data quality and data governance, automating the deployment and maintenance of ML models, ensuring the models are production-ready, and dealing with issues such as model drift and model degradation.
Creating an organization powered by reliable data- and AI-driven solutions largely depends on resolving all of the challenges above. There is a clear demand not only to embed the new technology into the organization's processes, as more and more people become involved along the end-to-end lifecycle, but also to integrate it with other organizational systems. Only then can an organization promptly obtain all the benefits of data and AI advancements.
The AI Journey: A Guide to Succeeding in Today's Dynamic World
The technology sector is continually evolving to tackle the aforementioned issues through improved tools and established methods. The goal of MLAB is to assist individuals and companies in designing, building, and implementing their MLOps Digital Highway following best practices. We therefore introduce our proposed solution for this journey, the "game changer" that makes it possible to reach this goal and reliably integrate data- and ML-driven solutions into any business or application.
The Digital Highway consists of many important elements and is not tied to any particular tool, since the appropriate tool should be selected based on the specific situation. We believe the blueprint can benefit companies of all sizes, allowing them to derive the greatest value from their data and machine-learning models. However, implementing the entire loop of this approach requires a holistic view of the approach and of the organization itself.
Let's describe the full ML lifecycle at a very high level. The MLOps process begins by collecting and processing data, which is then used to train machine learning models. The optimal model is then selected and deployed for practical use, where it functions as a service for other applications. Data is continually collected through user engagement and used to retrain the ML model, completing the cycle. All versions of code, data, models, and other artifacts are recorded and saved, and every phase of the process is closely monitored using specialized monitoring solutions.
...or check out our previous blog post directly, where you can find a detailed explanation of the different components of our blueprint for each stage of the MLOps lifecycle.
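To make this loop more concrete, here is a minimal sketch of its stages in Python, assuming scikit-learn, a toy dataset, and an in-memory stand-in for a model registry; the function names and the registry are hypothetical placeholders for illustration, not part of our blueprint.

```python
# Minimal sketch of the high-level MLOps loop: collect data, train, select, deploy, retrain.
# Illustrative only; the function names and in-memory "registry" are hypothetical placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MODEL_REGISTRY = {}  # stand-in for a real model registry that versions every artifact


def collect_and_process_data():
    # In production this would pull from governed data sources; here we use a toy dataset.
    X, y = load_iris(return_X_y=True)
    return train_test_split(X, y, test_size=0.2, random_state=42)


def train_and_select(X_train, y_train, X_test, y_test):
    # Train one or more candidates and keep the best-performing model together with its score.
    model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
    return model, accuracy_score(y_test, model.predict(X_test))


def deploy_if_better(model, score, name="demo-model"):
    # Promote the candidate only if it beats the currently deployed version.
    current = MODEL_REGISTRY.get(name)
    if current is None or score > current["score"]:
        MODEL_REGISTRY[name] = {"model": model, "score": score}


# One pass through the loop; in practice this repeats as new data arrives from user engagement.
X_train, X_test, y_train, y_test = collect_and_process_data()
model, score = train_and_select(X_train, y_train, X_test, y_test)
deploy_if_better(model, score)
```

In a real setup, each of these steps would be a versioned, monitored pipeline stage rather than a local function call.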
Ready to jump on the market? Not yet…
Data Engineering as first-aid
Data Engineering encompasses a wide range of techniques and tools aimed at ensuring that every aspect of data delivery is dependable and consistent. This includes the collection and ingestion of data from various sources, as well as the analysis of the relevant data for specific use cases, which should be prioritized and pre-processed until it is ready for consumption. By implementing these techniques, every data user can easily search, access, comprehend, and utilize the data.
Nonetheless, what is often missing is an approach to reliable data engineering, or, as it has recently become known, "Data Reliability Engineering", a sister discipline of the aforementioned Site Reliability Engineering (SRE). This concept strives to apply the practices of DevOps and SRE to unified data platforms in order to ensure continuous and reliable software delivery without sacrificing data quality. As a result, we have a solution that not only collects, analyzes, and transforms data, but also maintains high standards of data quality, flexibility, and agility.
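As an illustration of what such data reliability practices can look like in code, here is a minimal sketch of automated data quality gates using pandas; the schema, thresholds, and helper name are assumptions made for this example rather than a prescribed implementation.

```python
# Minimal sketch of data quality gates in the spirit of Data Reliability Engineering.
# The expected schema and thresholds are assumptions; real platforms track such checks as SLIs.
import pandas as pd


def check_data_quality(df: pd.DataFrame) -> list:
    """Return a list of violated expectations; an empty list means the batch passes."""
    violations = []
    expected_columns = {"user_id", "timestamp", "amount"}  # hypothetical schema
    missing = expected_columns - set(df.columns)
    if missing:
        violations.append(f"missing columns: {missing}")
    if "amount" in df.columns and (df["amount"] < 0).any():
        violations.append("negative values in 'amount'")
    null_ratio = df.isna().mean().max() if len(df) else 1.0
    if null_ratio > 0.01:  # tolerate at most 1% nulls in any column
        violations.append(f"null ratio {null_ratio:.2%} exceeds the 1% budget")
    return violations


batch = pd.DataFrame({"user_id": [1, 2], "timestamp": ["2024-01-01", "2024-01-02"], "amount": [10.0, 25.5]})
assert check_data_quality(batch) == []  # a clean batch passes all gates
```

Gates like these would run on every ingested batch, and their pass rate can itself be treated as a reliability indicator.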
Building Reliable Data Science
Reliable Data Science refers to the practice of conducting data analysis and developing models in a way that is rigorous, transparent, and trustworthy. This involves using methods and techniques that have been proven to be accurate and unbiased, and carefully documenting every step of the data science process so that others can understand and validate the results. A key aspect of reliable data science is ensuring that the data being used is of high quality and is appropriate for the problem at hand. Reliable data science also involves making use of robust and appropriate statistical methods, and carefully interpreting the results of data analysis and modeling. Finally, reliable data science requires ongoing monitoring and updating of models to ensure that they remain accurate and relevant over time, and to incorporate new information as it becomes available.
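As a small illustration of two of these habits, reproducibility and honest uncertainty reporting, here is a sketch using scikit-learn with a fixed random seed and cross-validation; the dataset and model choice are arbitrary examples, not recommendations.

```python
# Sketch of two reliable-data-science habits: fix randomness for reproducibility and
# report a distribution of scores instead of a single, possibly lucky, number.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000, random_state=42))

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f} over {len(scores)} folds")
```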
From Data Science to ML Engineering
The steps involved in constructing a machine learning model pipeline can vary greatly from project to project, but some common procedures are typically included: retrieving the necessary data from storage, performing any model-specific transformations, and selecting and normalizing features as necessary. Even though the ML training pipeline is the only part of our system that uses ML-specific algorithms, it should be easily and safely reusable by anyone, ensuring effortless implementation and minimizing potential risks. Once a training pipeline has been validated and evaluated on new data, it is considered for deployment as a production-ready artifact if it is superior to the existing one.
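The sketch below illustrates such a reusable training pipeline with scikit-learn: the model-specific transformations live inside the pipeline object, and the candidate is only treated as production-ready if it beats the score of the currently deployed model (here an assumed constant).

```python
# Minimal sketch of a reusable training pipeline; the dataset, metric, and promotion
# threshold are assumptions for illustration, not our prescribed implementation.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def build_training_pipeline() -> Pipeline:
    # Model-specific transformations (here: feature normalization) live inside the pipeline,
    # so anyone can rerun training safely without re-implementing the preprocessing.
    return Pipeline([
        ("scale", StandardScaler()),
        ("model", Ridge(alpha=1.0)),
    ])


X, y = load_diabetes(return_X_y=True)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.2, random_state=0)

candidate = build_training_pipeline().fit(X_train, y_train)
candidate_score = r2_score(y_holdout, candidate.predict(X_holdout))

CURRENT_PRODUCTION_SCORE = 0.40  # hypothetical score of the model already in production
if candidate_score > CURRENT_PRODUCTION_SCORE:
    print(f"candidate (R2={candidate_score:.2f}) is promoted as the new production-ready artifact")
```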
Revolutionizing industries with reliable ML engineering includes developing training pipelines gated by SRE principles that tackle all the reliability challenges, both the data-specific and the ML-specific ones. That is how we integrate our ML system into the customer-facing environment in a continuous and reliable way.
Time to bring your value to the market
Testing & Quality Assurance
An ML model is an essential part of the final deliverable to end users, and therefore it needs to be continuously integrated into the existing application on the customer's side. In particular, it requires continuous retraining and re-integration into the customer's system to ensure the model can adapt to changes in the data distribution over time, improve its performance, and handle potential vulnerabilities or drift in the data for security and compliance. The primary goal of this concept is that the source code gets automatically compiled, built, and approved through numerous tests every time new code or a new ML model is committed to the version control system.
This stage is crucial: anything that adds value to the architecture must have successfully passed all the quality gates and tests, otherwise we cannot proceed to any automation. In other words, we cannot automate what does not work as intended. The result of this stage is a set of pipeline components that are ready to be deployed in the next stage or released to production. We should therefore always be vigilant about applying SRE principles and best practices at this stage to optimize DevOps capabilities and quality assurance.
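A minimal sketch of what such quality gates can look like, written as pytest tests that would run on every commit; the thresholds and the helper function are hypothetical and would be agreed per project.

```python
# Sketch of automated quality gates for an ML artifact, expressed as pytest tests.
# Thresholds and the train_candidate() helper are hypothetical examples.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def train_candidate():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, X_test, y_test


def test_minimum_accuracy():
    # Gate 1: the candidate must clear an agreed accuracy floor before it can be deployed.
    model, X_test, y_test = train_candidate()
    assert model.score(X_test, y_test) >= 0.90


def test_prediction_contract():
    # Gate 2: the serving contract holds, i.e. one prediction per row and only valid class labels.
    model, X_test, _ = train_candidate()
    predictions = model.predict(X_test)
    assert predictions.shape == (len(X_test),)
    assert set(np.unique(predictions)).issubset({0, 1, 2})
```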
Integration, Validation & Automation
Continuous Integration (CI) and Continuous Validation (CV) are development methodologies aimed at enhancing the quality and reliability of software. CI involves building and testing the software every time a developer submits changes to a code repository. Its purpose is to catch integration problems and fix bugs before they escalate into bigger issues. This is achieved by automating the build, testing, and deployment of code, with the ultimate goal of keeping the software in a ready-to-release state.
Continuous Validation (CV), on the other hand, is the process of continuously verifying the software against specified requirements and expectations. It includes validating the software’s functionality, performance, security, and other non-functional requirements. CV helps to catch any issues early in the development process and prevents the introduction of new bugs and defects into the codebase.
The combination of CI and CV practices helps to ensure that software is of high quality and that changes are made in a controlled and predictable manner. This helps to reduce the risk of software failures, improve overall software quality, and increase confidence in the software development process.
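To complement the functional gates shown earlier, here is a tiny sketch of a Continuous Validation check for a non-functional requirement, in this case a prediction latency budget; the budget value and setup are assumptions made for illustration.

```python
# Sketch of a Continuous Validation check for a non-functional requirement (latency).
# The 50 ms budget is a hypothetical requirement, not a recommendation.
import time

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

start = time.perf_counter()
model.predict(X[:1])
latency_ms = (time.perf_counter() - start) * 1000

LATENCY_BUDGET_MS = 50  # assumed non-functional requirement for a single prediction
assert latency_ms < LATENCY_BUDGET_MS, f"single prediction took {latency_ms:.1f} ms"
```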
System Design
System design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. It involves the identification of the system's components, their relationships, and the interactions between them, as well as the definition of the data that will be processed and stored by the system. System design is a critical step in the development of any complex system, as it lays the foundation for the implementation and testing of the system. The design should consider the system's performance, scalability, reliability, and maintainability requirements, as well as any constraints or limitations that may impact the system's design.
Planning
The process of planning in building and deploying a successful ML model involves determining the goals, defining the scope, and allocating the necessary resources. This step helps ensure that the project is clearly defined and that all the resources and support needed are available. Essential elements of planning in the ML lifecycle include understanding the problem to be solved, setting clear objectives, identifying the data sources for training, allocating resources, and creating a timeline with specific milestones and deliverables. Risk management and cost estimation are also critical factors to consider.
More Reliability Engineering with SLO Assurance
Let's now expand upon the concept of SRE and adapt it for use in MLOps. One application of this adaptation would be, for example, to guarantee a satisfactory level of performance for a recommendation system, or to require that model training for a deep neural network completes within a certain number of minutes. To this end, our SRE approach introduces service-level indicators (SLIs), service-level objectives (SLOs), and error budgets to balance the cost of delivery speed against reliability.
SLO (Service Level Objective) assurance is the process of ensuring that a service provider's performance meets the agreed-upon SLOs for that service. In other words, SLOs are targets or promises that the service provider makes to its customers regarding the performance and availability of a service. Since we always aim to minimize the effort required to bring new services to end users with agility and credibility, in compliance with the relevant regulations, SLO assurance is an important part of managing and delivering reliable, high-quality services; it also ensures that customers receive the level of service they expect and nothing less.
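A small worked example of SLO assurance with an error budget, using purely hypothetical numbers for a prediction service:

```python
# Worked sketch of SLO assurance: SLO = 99.5% of prediction requests succeed within the
# agreed latency over a 30-day window. All numbers below are hypothetical.
WINDOW_REQUESTS = 1_000_000   # requests observed in the 30-day window
SLO_TARGET = 0.995            # agreed service-level objective
GOOD_REQUESTS = 996_200       # requests that met the criterion, i.e. the measured SLI numerator

sli = GOOD_REQUESTS / WINDOW_REQUESTS                  # measured service-level indicator: 99.62%
error_budget = (1 - SLO_TARGET) * WINDOW_REQUESTS      # allowed bad requests: 5,000
bad_requests = WINDOW_REQUESTS - GOOD_REQUESTS         # actual bad requests: 3,800
budget_consumed = bad_requests / error_budget          # 76% of the error budget is spent

print(f"SLI = {sli:.4%}, error budget consumed = {budget_consumed:.0%}")
if budget_consumed > 1.0:
    print("SLO violated: freeze risky releases and prioritize reliability work")
```

The remaining error budget is what allows teams to trade delivery speed against reliability in an explicit, measurable way.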
Bridge the gap and… mind your operations
Monitoring & Observability for Data & ML
Data and ML pipelines are no exception to the rule: they have the same potential for malfunction as any other software system. Fundamentals of SRE, i.e., service-level indicators (SLIs) and service-level objectives (SLOs), are key to deploying ML-driven applications and keeping them healthy. This follows from the reasons mentioned above for why ML models need to be continuously retrained and integrated into customer-facing applications. Companies can face two types of challenges when implementing MLOps: issues with the quality of the model's predictions, and issues with the system's performance. To address the first, companies can build separate data and model pipelines that allow for continuous integration and deployment. To address the second, companies can use observability tools specific to MLOps, which address challenges not commonly encountered in non-ML systems. For a more detailed description of monitoring and observability for MLOps, see our observability blog post.
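As one concrete illustration of an ML-specific check that could feed such an SLI, here is a minimal data-drift test using a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic data and the alerting threshold are assumptions, and production setups would typically rely on dedicated observability tooling instead.

```python
# Minimal sketch of a data-drift check between training-time and live feature distributions.
# Synthetic data and the alerting threshold are assumptions for illustration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # distribution seen at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)      # distribution observed in production

statistic, p_value = ks_2samp(training_feature, live_feature)
DRIFT_P_VALUE_THRESHOLD = 0.01  # assumed alerting threshold

if p_value < DRIFT_P_VALUE_THRESHOLD:
    print(f"drift detected (KS statistic = {statistic:.3f}); consider triggering retraining")
```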
Continuous Delivery
Once we have all the production-ready artifacts, as described above under Continuous Integration, we need to make them available to the end users. Continuous Delivery is about continuously bringing new models, as well as potential changes to the data pipelines or application code, to the production environment. When new data is collected and selected for consumption, all the pipelines should run automatically, and the new updates should become available to consumers. How model serving is implemented within the application pipeline and which release strategy is chosen are critical decisions at this step.
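As an illustration of one common release strategy, here is a minimal sketch of canary routing between a stable model and a newly delivered one; the traffic share and the two predict functions are hypothetical placeholders, and real deployments would usually implement this at the serving-infrastructure level.

```python
# Minimal sketch of a canary release for model serving: a small share of traffic goes to the
# new model while it is observed. Both predict functions are hypothetical stand-ins.
import random


def predict_stable(features):
    return 0  # placeholder for the currently deployed production model


def predict_canary(features):
    return 1  # placeholder for the newly delivered model version


CANARY_TRAFFIC_SHARE = 0.05  # assumed: route 5% of requests to the new model


def serve(features):
    model = predict_canary if random.random() < CANARY_TRAFFIC_SHARE else predict_stable
    return model(features)


print([serve({"feature": 1.0}) for _ in range(10)])
```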
Machine Learning Architects Basel
Do you feel as if you are trying to piece together a puzzle with endless pieces and no clear picture of how to implement reliable ML, while making sense of all the different software options, team roles, and systems? As we explained in this blog post, navigating this vast array of options, responsibilities, and systems can be overwhelming, like finding your way through a maze.
Now that you understand the importance of ensuring access to reliable data in times of uncertainty, you need the right technology, experienced personnel, and a clear strategy for your data operations. Attempting to handle it all on your own can be difficult and time-consuming, from assembling the necessary technology to hiring qualified staff and implementing effective MLOps practices. Partnering with a capable and trustworthy technology partner can be a beneficial step toward quickly and efficiently improving the dependability of your data- and ML-driven systems.
Machine Learning Architects Basel (MLAB) is a member of the Swiss Digital Network (SDN). We have created an effective MLOps framework that combines our expertise in machine learning and MLOps with our extensive knowledge and experience in DevOps, SRE, and the monitoring of traditional non-ML systems. Built on our proven SRE approach, this MLOps framework places a strong emphasis on monitoring data and models as a fundamental component.
If you are keen to learn more about how MLAB can help your organization create long-lasting benefits by developing and maintaining reliable machine learning systems, and contribute to the implementation of a digital roadmap for your business, please do not hesitate to contact us.
Stay tuned for our upcoming webinars and series of blog posts diving deeper into each of the lifecycle stages introduced above.
References and Acknowledgements
- Reliable Machine Learning: Applying SRE Principles to ML in Production, by Cathy Chen et al., O'Reilly Media, Inc.
- Designing Machine Learning Systems, by C. Huyen, O'Reilly Media, Inc.
- Who Needs MLOps: What Data Scientists Seek to Accomplish and How Can MLOps Help?, by Sasu Mäkinen et al.
- DevOps: A Software Architect's Perspective, by L. Bass et al.
- Practical MLOps, by Noah Gift and Alfredo Deza, O'Reilly Media, Inc.
- MLOps - Definitions, Tools and Challenges, by G. Symeonidis et al.
- An Intelligent DevOps Platform Research and Design Based on Machine Learning, by Dhaya Sindhu Battina
- Towards MLOps: A Case Study of ML Pipeline Platform, by Yue Zhou et al.
- Hidden Technical Debt in Machine Learning Systems, by D. Sculley et al.
- Postmortem Culture: Learning from Failure, by John Lunney and Sue Lueder
- Image by studio4rt on Freepik