Technology research firm Gartner, Inc. has estimated that 85% of artificial intelligence (AI) and machine learning (ML) projects fail to produce a return for the business. The reasons often cited for the high failure rate include poor scope definition, bad training data, organizational inertia, lack of process change, mission creep and insufficient experimentation.
To this list, I would add another reason I have seen cause many organizations to struggle to get value from their AI projects. Companies have often invested heavily in building data science teams to create innovative ML models. However, they have failed to adopt the mindset, team, processes and tools necessary to efficiently and safely put those models into a production environment where they can actually deliver value.
To avoid this trap and achieve greater value from AI, here are four recommendations to help your organization translate your data scientists' amazing algorithms into real business impact.
1. Adopt a software mindset.
ML models are undoubtedly important, but developing ML code is just one part of the AI/ML life cycle. Data collection, feature extraction, data verification, machine resource management and other activities adjacent to the ML code actually consume the bulk of time and resources in the ML life cycle.
To be successful, companies must stop thinking of models as an end in themselves. The fact is that a model is simply a way to transform data, written in the form of a function. In short, the model is just software.
When software engineers think about putting a model into production, their concerns are around how the model handles errors, whether the model will do what it is expected to do, whether it can respond quickly enough and whether it will integrate effectively into the organization's software stack.
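To make that concrete, here is a minimal sketch of treating a model as ordinary software, with input validation, explicit error handling and latency tracking. The file name, request schema and serving framework (a scikit-learn model serialized with joblib, exposed via FastAPI) are illustrative assumptions, not a prescribed stack.

```python
# Minimal sketch: serving a trained model with the input validation, error
# handling and latency tracking a software engineer would expect in production.
# Assumes a scikit-learn model saved to "model.joblib" (hypothetical artifact).
import time

import joblib
import numpy as np
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # produced upstream by the data science team

class PredictRequest(BaseModel):
    features: list[float]  # the feature vector the model was trained on

@app.post("/predict")
def predict(req: PredictRequest):
    start = time.perf_counter()
    try:
        x = np.asarray(req.features, dtype=float).reshape(1, -1)
        y = model.predict(x)
    except ValueError as exc:
        # Wrong shape or bad values: fail loudly with a clear client error.
        raise HTTPException(status_code=422, detail=str(exc))
    latency_ms = (time.perf_counter() - start) * 1000
    return {"prediction": y.tolist(), "latency_ms": round(latency_ms, 2)}
```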
Adopting a software mindset means moving away from an "artisanal" approach of handling every model as a one-off toward an "industrial" approach focused on putting the tools and processes in place to get models into production efficiently and effectively.
2. Build an ML platform team.
Since models are software, companies should look to their software organizations when they think about how to structure the ML operations team that will be responsible for bringing models into production.
Where a software organization has product development teams supported by an applications platform team (along with a core group to manage the infrastructure), the AI/ML organization should have data science teams supported by an ML engineering group—along with a team whose mandate is to assemble, manage and monitor the platform that the data science and ML teams use (i.e., an ML platform team staffed with ML platform engineers).
The ML platform engineer is a crossover role, similar to a DevOps position but with an added software engineering component, since they might need to build APIs or support the development of infrastructure patterns, for example. Awareness of data also helps because data is so intertwined with ML. The role requires strong soft skills, curiosity and a collaborative mindset as well, since ML platform engineers work with diverse teams across the ML life cycle.
3. Establish end-to-end processes.
When a company is still in the "artisanal" stage of ML and is working with only a few use cases, it can get by with bespoke processes, treating each model as a one-off. However, as it expands the number of models that it's putting into production, it needs to standardize its processes to ensure a high level of confidence in both the processes themselves and in the models that it's putting into production.
This means establishing processes across the entirety of the model life cycle—which can be challenging because of the diverse teams involved throughout the life cycle. For example, different groups or individuals tend to be involved in promoting models from lab to staging and then to prod. As a result, different processes need to be implemented for each stage.
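What that standardization can look like is sketched below: a promotion gate that codifies the checks each group signs off on before a model moves to the next stage. The registry methods, metric names and thresholds are hypothetical placeholders for whatever model registry and sign-off criteria your organization uses.

```python
# Sketch of a promotion gate between life-cycle stages. The registry calls
# (get_metrics, transition_stage) and the thresholds are illustrative assumptions.
STAGES = ["lab", "staging", "prod"]

GATES = {
    "staging": {"min_auc": 0.80},                          # data science sign-off
    "prod": {"min_auc": 0.80, "max_p99_latency_ms": 200},  # ML platform sign-off
}

def promote(model_name: str, version: int, target: str, registry) -> bool:
    """Promote a model version only if it passes the target stage's checks."""
    metrics = registry.get_metrics(model_name, version)  # hypothetical call
    gate = GATES[target]
    if metrics.get("auc", 0.0) < gate["min_auc"]:
        return False
    if "max_p99_latency_ms" in gate and \
            metrics.get("p99_latency_ms", float("inf")) > gate["max_p99_latency_ms"]:
        return False
    registry.transition_stage(model_name, version, target)  # hypothetical call
    return True
```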
It's worth saying again that processes need to be established across the entire model life cycle. Yes, handoffs need to be defined all the way from experimentation to production. However, a model's life cycle doesn't end when it goes live in production; processes for monitoring and retraining models need to be defined and vetted as well.
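As a minimal sketch of what such a monitoring process might check, the example below compares logged production inputs against the training sample and flags the model for retraining when the distributions drift. The data sources, the per-feature Kolmogorov-Smirnov test and the 0.05 threshold are illustrative assumptions.

```python
# Sketch of a post-deployment monitoring check: flag retraining on input drift.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_sample: np.ndarray, live_sample: np.ndarray,
                     p_threshold: float = 0.05) -> bool:
    """Two-sample KS test per feature; report drift if any p-value is low."""
    for i in range(train_sample.shape[1]):
        _, p_value = ks_2samp(train_sample[:, i], live_sample[:, i])
        if p_value < p_threshold:
            return True
    return False

# Synthetic data standing in for logged production inputs.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 3))
live = rng.normal(0.5, 1.0, size=(1000, 3))  # shifted feature distribution
print(needs_retraining(train, live))          # True -> trigger the retraining process
```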
4. Incorporate an operational platform.
Companies that are successful with AI/ML almost invariably have a dedicated platform for operationalizing models, and for a variety of reasons. First and foremost, the computational workloads that a system supports in experimentation or training are very different from the workloads in the operational phase.
In experimentation, the limiting factor is how quickly you can spin up resources to work with tools such as Scikit-learn or TensorFlow. When you move into the implementation phase, you care about a completely different set of capabilities. Is the platform resilient and highly available? Does it have hooks into monitoring tools such as Datadog or New Relic?
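As a simplified illustration of those operational hooks, an operational platform typically exposes liveness and readiness endpoints that load balancers and monitoring agents (such as Datadog or New Relic) can probe, plus per-request timing that can be shipped to a metrics backend. The sketch below uses the same FastAPI-style service as earlier; the endpoint names and logging-based metric emission are assumptions, not a specific vendor's integration.

```python
# Sketch of operational hooks: liveness/readiness probes for high availability
# and a request-timing middleware whose output a monitoring agent could collect.
import time

from fastapi import FastAPI, Request

app = FastAPI()
MODEL_LOADED = True  # in practice, set after the model artifact loads successfully

@app.get("/healthz")
def liveness():
    # Liveness: the process is up; the orchestrator restarts it if this fails.
    return {"status": "ok"}

@app.get("/readyz")
def readiness():
    # Readiness: only route traffic once the model is actually loaded.
    return {"ready": MODEL_LOADED}

@app.middleware("http")
async def time_requests(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Emit to your metrics backend here (e.g., a StatsD/Datadog client);
    # printing keeps this sketch dependency-free.
    print(f"{request.url.path} took {elapsed_ms:.1f} ms")
    return response
```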
That's why even companies that have a training platform should consider incorporating an operational platform. As a rule, the ML platform itself should provide "self-service with guardrails," allowing data scientists to quickly and safely deploy models into production. At a minimum, the tools that a high-functioning ML team requires for managing operational AI workloads at scale should include:
• A training platform.
• An operational AI (or serving) platform.
• A data platform.
• DevOps to orchestrate everything.
• A workflow system, which may or may not include a batch prediction platform.
By adopting a software mindset around ML and putting in place the team, processes and tools to safely and efficiently deploy ML models, companies can significantly reduce the time required to put models into production and see value from their research innovations.
Implementing standard end-to-end processes can also improve model governance and prepare teams for upcoming regulations around AI, such as the EU's AI Act and the American Data Privacy and Protection Act (ADPPA).
Finally, these companies can free up their data scientists to develop even more innovative models to deliver intelligent products and services, ultimately increasing AI's value and impact on the business.
Article source: https://www.forbes.com/sites/forbestechcouncil/2023/01/17/achieving-next-level-value-from-ai-by-focusing-on-the-operational-side-of-machine-learning/?sh=22fd8f682d7e