
What can AI strategists learn from software development history?

Padmashree Shagrithaya
2021-09-15

AI and machine learning adoption has taken a significant leap forward in the last few years. With this increase in adoption, the challenges surrounding the administration of AI models have multiplied too. Most large organizations that are not born digital have allowed many AI experiments across the organization. From my conversations with industry leaders and fellow professionals, I see common challenges: models built on varied platforms and tools prove to be high-maintenance; even though some of the models or features could be leveraged across processes, their usage is restricted to their creators; and many models in production are rendered redundant because they no longer perform to their initial goal.

“Models outgrowing the creators!”

“Hurt more than help”

With this backdrop, let’s rewind a bit through software development history. The milestones that were key in defining today’s software development maturity were: a) establishing common coding standards, building trust in the code; b) establishing an independent quality assurance team for testing standards and testing itself; c) defining agile methodologies to scale; and d) combining development and IT operations for operationalization and maintenance through DevOps. Given that the world was going through this unprecedented technology transformation for the first time, it took almost 30 years to reach this maturity level.

For some of us who have lived through this history, the similarity with the AI/ML maturity journey is palpable. We do not, however, have as much time as we had during the software development boom. AI growth has been rapid, and the current COVID situation has necessitated a lightning pace for both its maturity and its scale.

Let’s zoom in on some of the key issues staring at Chief Data Officers today and draw answers from SDLC history. As mentioned earlier, the AI industry has leapfrogged, so the problem is no longer establishing the need for AI or proving its value. The issue is creating a “whole” out of the “sum of parts”!

a) Building Trust in the Code:

With access to many open-source libraries for building algorithms, developing predictive models to ease business decisions is relatively simple these days. What is more difficult is building long-lasting trust in these models among users, management, customers, and regulators. An ML model has multiple components that must each earn that trust: it is more complex than plain software in that it is code + data + algorithm. More often than not, the focus when building models is on the data and the algorithm, while the code itself is given less importance, because proving the model out is more interesting than the underlying code. To return to the software development comparison: in the initial stages there, too, the focus was on proving that the software could meet the functional requirements, rather than on maintaining the discipline of building code to high standards.

Learning #1: Do not ignore the code in your ML models. This ensures the code doesn’t inherently allow for compromises such as security threats or compatibility issues. Ensure documentation of every leg of the model lifecycle.
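
As a minimal sketch of what such lifecycle documentation might look like in practice, here is a plain-Python “model card” record. The class and field names are illustrative assumptions, not a standard; the point is simply that data sources, pre-processing steps, algorithm, and validation metrics get captured alongside the code.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Optional

# An illustrative record for documenting each leg of the model lifecycle.
# All fields here are hypothetical; adapt them to your own review and
# audit requirements.
@dataclass
class ModelCard:
    name: str
    version: str
    owner: str
    training_data_source: str        # where the data came from, and which snapshot
    preprocessing_steps: List[str]   # every transformation applied before training
    algorithm: str                   # e.g. "logistic regression"
    validation_metrics: Dict[str, float]  # metrics recorded at sign-off
    known_limitations: List[str] = field(default_factory=list)
    approved_on: Optional[date] = None

# Example usage with made-up values:
card = ModelCard(
    name="churn-predictor",
    version="1.3.0",
    owner="risk-analytics",
    training_data_source="crm_snapshot_2021_06",
    preprocessing_steps=["drop PII columns", "impute missing tenure"],
    algorithm="logistic regression",
    validation_metrics={"auc": 0.87},
    known_limitations=["not validated for accounts younger than 90 days"],
)
```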

b) AI Quality Assurance:

Testing AI/ML-based systems can be tricky: traditional testing techniques do not work on them, and multiple levels of testing need to be planned. Core model testing, called validation, is something a data scientist is trained to do well. However, the devil could be hiding in the data, or can arise when the model output is applied to action. Starting with the data, understanding it and creating appropriate test strategies for it is key. What needs unearthing from the data may be bias or outliers; appropriate techniques to detect and treat them before building the models are crucial. It may also be a good idea to run human validation in parallel with critical ML-based systems, to ensure avoidance of any adverse impact.
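
As a hedged illustration of such data-level checks, the sketch below flags heavy class imbalance (a crude bias signal) and counts IQR-based outliers per numeric feature before any model is built. The threshold and the function name are assumptions for illustration, not a prescribed test suite.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str, numeric_cols: list) -> dict:
    """Toy pre-modelling checks: label imbalance and IQR-based outlier counts."""
    report = {}

    # Class imbalance: a heavily skewed label can silently bias the model.
    label_share = df[label_col].value_counts(normalize=True)
    report["label_shares"] = label_share.to_dict()
    report["imbalance_warning"] = bool(label_share.max() > 0.9)  # illustrative threshold

    # Count outliers per numeric feature using the standard 1.5 * IQR rule.
    outliers = {}
    for col in numeric_cols:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
        outliers[col] = int(mask.sum())
    report["outlier_counts"] = outliers
    return report
```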

Learning #2: Testing of AI/ML-based systems is much more complex than testing rule-based software systems. The testing strategy needs to be as elaborate as possible, taking into account data, code, and the underlying algorithm.

c) Agile Model Development Methodologies

Agile is about incorporating real-world feedback into iterative development. Because ML models are already built on real-world data, we often hear data scientists argue that AI/ML projects are different and that concepts like agile cannot be imposed on them. ML models are indeed built on real-world data. However, whether the right data (features) has been included, or whether the inadequacies of the data have been handled (pre-processing), may not be apparent in one go. High model accuracy, especially, can be misleading, as the sketch below shows. So the sooner the system is put in the hands of users to test, the better it is for creating trustworthy AI systems. Also, even though the model is at the core of an AI system, the rest of the process steps still need to be tested for corner conditions. So involve users early on in AI projects and implement agile methodologies to benefit the project.
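
To make the point about misleading accuracy concrete, here is a small sketch (using scikit-learn, which the article does not prescribe): on synthetic data where 95% of labels are negative, a model that always predicts the majority class scores roughly 95% accuracy while catching zero positive cases.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical imbalanced labels: 95% negative, 5% positive.
rng = np.random.default_rng(0)
y = rng.choice([0, 1], size=1000, p=[0.95, 0.05])
X = rng.normal(size=(1000, 3))  # features are irrelevant to this baseline

# A model that always predicts the majority class...
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
pred = baseline.predict(X)

# ...looks impressive on accuracy but is useless in practice.
print("accuracy:", accuracy_score(y, pred))           # ~0.95
print("recall on positives:", recall_score(y, pred))  # 0.0
```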

Learning #3: Agile is not just for software development but also for AI systems with ML models at their core. It will help in identifying any additional pre-processing steps required and in catching corner conditions early on.

d) MLOps for Scale:

Managing multiple models built by varied teams is a huge challenge. Traditionally, the effort of bringing together data from different sources has been handled by IT teams, while data scientists have been aligned more with the business functions, understanding the impact areas and building predictive models to improve the process. While in an ideal world the handshake between data scientists and IT teams would be seamless, in reality data scientists, often coming from a research background, understand little of the vast technology world, and vice versa.

As a result, models that showed great value in their respective processes fail to generate value for the organization as a whole, a classic failure of “the whole is greater than the sum of its parts”! To build an enterprise AI strategy, it is important to adopt MLOps: integrating data pipelines from IT and implementing CI/CD through the IT ops team, learning from the introduction of DevOps.
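
One small, concrete piece of that DevOps-style learning is a promotion gate in the CI/CD pipeline: a step that blocks deployment of a retrained model unless it matches or beats the production model on a held-out set. The sketch below is an assumed pattern, not a prescribed tool; the function and variable names are hypothetical.

```python
import sys
from sklearn.metrics import roc_auc_score

# Hypothetical CI/CD gate: compares a candidate model against the one
# currently in production and fails the build on regression. Assumes
# binary classifiers exposing predict_proba.
def promotion_gate(candidate, production, X_holdout, y_holdout, min_gain=0.0):
    cand_auc = roc_auc_score(y_holdout, candidate.predict_proba(X_holdout)[:, 1])
    prod_auc = roc_auc_score(y_holdout, production.predict_proba(X_holdout)[:, 1])
    print(f"candidate AUC={cand_auc:.3f}, production AUC={prod_auc:.3f}")
    return cand_auc >= prod_auc + min_gain

# In a CI job, a non-zero exit code would block the deployment stage:
# if not promotion_gate(candidate, production, X_holdout, y_holdout):
#     sys.exit(1)
```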

Learning #4: Implement MLOps to bring all the pieces together and reap larger organization-wide value from AI projects.

While AI is certainly maturing at a very fast pace, that pace itself can throw up many different challenges. Some of these challenges may be more complex than what we have seen in the past, but I do believe that many of them can be proactively addressed with experience. I am keen to know whether there are, indeed, more learnings from software development history that can be applied to help scale AI in a trusted manner.