
Innovation at speed: What Intelligent Industry can learn from Formula One’s data-driven innovation

Ashish Padhi
9 May 2022

In the second of our “Intelligent Industry: Journey to Farnborough International Airshow” blog series, Ashish Padhi delves into the data-driven rapid innovation process of Formula One aerodynamic design to prise out lessons for Intelligent Industry.

How to make the fastest car even faster?

In Formula One, when competing for wins at the front end of the grid, every hundredth of a second matters. Under these extreme conditions, squeezing performance out of the car is what Formula One teams are best known for. But how do they do this? One approach is data-driven rapid innovation.

A hundredth of a second is a championship

In summer 2016, there was an intense battle for the Formula One drivers’ and constructors’ championships. The pressure to find performance was palpable. Behind the scenes, away from the glamour of the track, I was part of the aerodynamics team, relentlessly pushing to find performance anywhere we could. Through simulations, I discovered that the engine intake duct, a critical half-a-metre-long carbon-fibre tube that looked like a squashed saxophone, had an air-flow separation problem. To fix it, I needed to change the saxophone’s shape. My only problem: there was no room for the duct to move or change.

So, what did my team and I do?

In six weeks flat, we built a new intake duct that fitted into the same tight space, added a chunk of power to the engine and shaved three hundredths of a second off the lap time. To put that into perspective, Lewis Hamilton lost pole in Japan in 2016 by thirteen thousandths of a second, which equated to 82 centimetres on track.
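As a quick sanity check on those numbers: a gap of thirteen thousandths of a second at a typical Formula One average lap speed does indeed work out to roughly 82 centimetres. The speed used below (about 227 km/h) is an illustrative assumption, not a figure from the race.

```python
# Back-of-envelope check: how far does a car travel in a given time gap?
# The average speed is an assumption for illustration only.

def gap_to_distance(gap_s: float, speed_kmh: float) -> float:
    """On-track distance in metres covered in gap_s seconds at speed_kmh."""
    return gap_s * speed_kmh / 3.6  # km/h -> m/s, then multiply by time

# Thirteen thousandths of a second at ~227 km/h:
print(round(gap_to_distance(0.013, 227.0), 2))  # -> 0.82 (metres)
```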

This is one of countless data-driven innovation stories in Formula One racing – faster, better and more thrilling – which Intelligent Industry can learn from and embed in its core. But what makes this seemingly impossible feat possible? Innovation at speed needs highly connected processes, data, tools and culture at its core. Let’s unpick the story to see what makes this rapid innovation possible.

Design anywhere: Central databases provide rapid access

Firstly, the intake duct, which originated in the engine team, could make its way to me with all relevant data from previous design iterations, and without ambiguity or duplication, because of a shared design database. And whether I was at home or in the office, I could access the duct and run flow simulations on it using shared simulation software.

Fail fast: Multiple layers of simulation speed up optimisation

Secondly, the simulations. I could create several versions of the duct in minutes, while ensuring zero clash with any of the dozen or so parts packed around it, thanks to a parametric Computer Aided Design (CAD) model and shape-sculpting software. These shapes were then tested for their effectiveness in solving the flow separation problem using a Computational Fluid Dynamics (CFD) model that simulated airflow within the duct with complex mathematical models. Pre-defined data-analysis programs and visualisation tools then helped me narrow down the options from twenty to just two. The second and third layers of simulation involved engine simulation software and an engine test rig. All of these generated multiple sets of data that needed to be combined and compared to narrow down the options, and this was done in a data dashboard.
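To give a flavour of the kind of pre-defined analysis that narrows twenty candidates to two, here is a minimal sketch. The metric names, weights and values are invented for illustration; they are not the team's actual criteria.

```python
# Hypothetical sketch: score each CFD result with a fixed, pre-agreed metric
# and keep the best performers. Lower separation area and lower pressure
# loss are both assumed to be better; the 0.7/0.3 weighting is illustrative.

def score(result: dict) -> float:
    return 0.7 * result["separation_area"] + 0.3 * result["pressure_loss"]

def shortlist(results: list[dict], keep: int = 2) -> list[str]:
    """Return the ids of the `keep` best-scoring design variants."""
    ranked = sorted(results, key=score)
    return [r["id"] for r in ranked[:keep]]

cfd_results = [
    {"id": "duct_v01", "separation_area": 4.1, "pressure_loss": 2.0},
    {"id": "duct_v07", "separation_area": 1.2, "pressure_loss": 1.1},
    {"id": "duct_v13", "separation_area": 2.8, "pressure_loss": 0.9},
    {"id": "duct_v19", "separation_area": 1.5, "pressure_loss": 0.8},
]
print(shortlist(cfd_results))  # -> ['duct_v07', 'duct_v19']
```

Because the metric is agreed up front, the ranking runs automatically on every batch of simulations rather than being re-debated each time.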

Dashboard: Integrated visualisation accelerates decisions

A single dashboard that combines and compares large data sets can make decision making quick and effortless. Critically, in our case, data from all three simulation sources (CFD, engine simulation and test rig) could be rapidly analysed at once because of a pre-defined shared data taxonomy and a single visualisation tool. Without the right tools to automatically gather and analyse that information, too much data could overwhelm and delay decision making.
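The shared-taxonomy point can be sketched in a few lines: because every source labels its results with the same variant ids and metric names, combining them is a simple join. All identifiers and values below are illustrative, not real data.

```python
# Three hypothetical result sets keyed by the same variant ids (the shared
# taxonomy). Merging them for a dashboard is then a mechanical join.

cfd      = {"duct_v07": {"pressure_loss": 1.1}, "duct_v19": {"pressure_loss": 0.8}}
engine   = {"duct_v07": {"power_gain_kw": 3.2}, "duct_v19": {"power_gain_kw": 4.0}}
test_rig = {"duct_v07": {"power_gain_kw": 3.0}, "duct_v19": {"power_gain_kw": 3.8}}

def combine(*sources: dict) -> dict:
    """Merge per-variant metrics from each source under prefixed keys."""
    merged: dict = {}
    for label, source in zip(("cfd", "engine_sim", "test_rig"), sources):
        for variant, metrics in source.items():
            for name, value in metrics.items():
                merged.setdefault(variant, {})[f"{label}.{name}"] = value
    return merged

dashboard = combine(cfd, engine, test_rig)
print(dashboard["duct_v19"])  # all three sources, side by side
```

Without the shared ids and metric names, each comparison would need bespoke matching logic, which is exactly the delay the taxonomy removes.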

No Artificial Intelligence (AI)?

Now, you could ask why we did not use an optimisation tool, AI or machine learning. The answer is that they would have needed a lot of time and effort to set up (data cleansing, parametrising) and tune (optimisation and learning parameters) for a problem like this: a one-off with too many constraints and not enough historic data.

Finally, we must not forget what made the process fast in addition to the data and tools.

Radical ownership: data needs culture for faster results

Without clear ownership, a process like the one described above can lead to confusion over responsibilities, unexpected delays and, ultimately, failure. In a radical ownership culture, leaders become champions of the projects, there to remove hurdles and marshal resources. They hand over decision making to the individuals doing the designing, manufacturing and testing. This, however, necessitates a high level of trust among those involved. It also requires psychological safety, which means people are not afraid of making a ‘wrong’ decision, and are not afraid of speaking up and sharing views confidently. Such a radical ownership culture can ensure that a faster and cheaper data-led innovation approach works every time, no matter the nature or scale of change involved.

So, how do I build such a core, I hear you ask?

To summarise, a data-driven rapid innovation approach can be developed in any organisation by simultaneously improving:

a. Data sources:
i. structured and central design databases
ii. multiple modes or layers of simulation (virtual, prototype, test rigs, etc.)

b. Data visualisation and analysis tools:
i. an integrated visualisation portal
ii. pre-defined algorithms to combine and compare different data sources

c. Culture:
i. empowered people who take ownership (avoid micro-management, don’t penalise failures and create trust)
ii. permission and time to ‘play’, so individuals and teams can iterate, learn and optimise faster

Moreover, it is worth bearing in mind that innovation takes time, in multiple senses of the word. Firstly, a rapid innovation process needs time to settle in. Secondly, people need free time to observe, question and think. With time and in time, the right mix of data and culture can make the seemingly impossible possible!

And what is yet to come?

So, what’s next? We are already moving from the automation of activity and simple logic to the automation of decision making, where the decision logic emerges from learning. Prescriptive analytics, few-shot learning and genetic algorithms are all tools that could soon help us automate complex decisions at scale. We will be well prepared for this next step if we recognise now that innovation at speed needs not only data and tools, but also the right culture.
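As a flavour of one such tool, here is a toy genetic-algorithm sketch that evolves a single parameter towards the minimum of a stand-in cost function. Real applications would optimise many parameters against expensive simulations; the cost function, rates and population size here are all illustrative.

```python
import random

def cost(x: float) -> float:
    """Stand-in for an expensive simulation; minimum is at x = 3.0."""
    return (x - 3.0) ** 2

def evolve(generations: int = 50, pop_size: int = 20) -> float:
    """Evolve a population of candidate values towards the cost minimum."""
    random.seed(42)  # deterministic for the example
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]                    # selection
        children = [p + random.gauss(0, 0.5) for p in parents]   # mutation
        population = parents + children
    return min(population, key=cost)

best = evolve()
print(best)  # close to 3.0, the cost minimum
```

Because the best candidates are always kept, the solution can only improve each generation, which is what makes such loops attractive for automating decisions at scale.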

To read more blogs in the Intelligent Industry: Journey to Farnborough International Airshow series, see quick links below:

A Quantum of Intelligent Industry – Mike Dwyer considers the potential impact that the world of quantum computing, sensing and communication could have on our ability to create new intelligent products and services.
