Artificial intelligence (AI) is the future of manufacturing. Many of the issues that continue to plague the industry – high costs, too much scrap, unplanned downtime, a lack of personnel for both specialized and routine tasks – can be addressed with this powerful new technology. AI has come a long way over the last few years, and robust solutions are now available to improve important tasks like quality control and equipment maintenance.
In our business, we have seen everything from visionary customers who have embraced AI for an ever-increasing number of applications to those who are reluctant to adopt it for a variety of reasons.
After years of focusing exclusively on how to add value to manufacturers with AI-based solutions, we know that AI can live up to its promises and that manufacturers will need to adopt AI to remain competitive. We also know what it takes to be successful and which common challenges companies face.
Understanding these challenges and addressing them from the get-go is the key to a successful AI implementation. In this blog, let’s take a look at common data challenges related to AI implementations and how to address them.
Data Challenge 1: Confusion! Which Data Do We Need?
Figuring out which data are needed to train the AI models can range from obvious to anything but. Generally, it is pretty clear which data need to be collected for quality control tasks, especially for visual inspection. If I want to know whether a label has a tear or a canister is dented, images of the product will provide the answer.
It gets more complicated for predictive maintenance applications. Here we have to figure out what impacts the performance of a specific piece of equipment, and the answer is not always obvious. The factors could be related to usage, such as the blade of a stamp getting dull, to environmental conditions, such as ambient temperature, or to the characteristics of the materials used to make the products.
In these cases, two approaches help to figure out what is important and what is not. First, experienced operators often have a good hunch about what causes problems. If somebody regularly experiences equipment failures on hot days in a non-air-conditioned plant, they can tell you that ambient temperature is likely to play a role. Systematically collecting this tribal knowledge is a critical early step in an AI implementation.
Second, unless measuring is difficult or expensive, “measure more now and figure out what’s important with AI” is a good approach. AI models are experts at finding patterns, and explainable AI can pinpoint which input factors impact the performance of a piece of equipment. Once you know what’s important, you can focus on measuring that.
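As a minimal sketch of that second approach, assume a handful of candidate factors (ambient temperature, blade runtime, slurry viscosity, line speed – all hypothetical column names) have been logged alongside a failure flag. Permutation importance from scikit-learn then ranks how much each factor actually matters to a trained model:

```python
# Sketch: rank candidate input factors by importance (column names are hypothetical).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Assumed layout: one row per production run, 'failure' marks runs that ended in a breakdown.
df = pd.read_csv("equipment_runs.csv")  # hypothetical file
features = ["ambient_temp_c", "blade_runtime_h", "slurry_viscosity", "line_speed"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["failure"], test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```

Factors that barely move the score when shuffled are good candidates to stop measuring; the ones at the top of the list deserve better sensors and more frequent logging.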
Read more about explainable AI here: How to Trust AI
Data Challenge 2: Unavailable Data
Even if you know which data you need, e.g. images of a battery after the slurry is deposited, it doesn’t mean that those data are available. In our experience, the required data are generally not available. The reason is simple: there was no need to collect that data prior to the AI implementation. If a battery was previously not inspected after the slurry was added, why take a picture?
Unavailable can also mean that data exists but is inaccessible, e.g. because the equipment manufacturer considers it proprietary and prevents you from getting to it.
We have also seen default settings on equipment that led to the collected data being automatically discarded after a few days because of data storage constraints. Unfortunately, this was only detected several weeks later, when the customer wanted to retrieve the data and start training the AI models.
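A periodic sanity check along these lines would have flagged that silent retention problem within days instead of weeks. The sketch below assumes the collected records land in a local SQLite database with a timestamp column; the database, table, and column names are purely illustrative.

```python
# Sketch: alert if the oldest retained record is younger than expected,
# which usually means data is being silently discarded upstream.
import sqlite3
from datetime import datetime, timedelta

EXPECTED_RETENTION_DAYS = 30  # illustrative target

conn = sqlite3.connect("line_data.db")  # hypothetical database
(oldest,) = conn.execute("SELECT MIN(captured_at) FROM sensor_readings").fetchone()
conn.close()

if oldest is None:
    raise SystemExit("WARNING: no records found at all - check the collection pipeline.")

oldest_ts = datetime.fromisoformat(oldest)
if datetime.now() - oldest_ts < timedelta(days=EXPECTED_RETENTION_DAYS):
    print(f"WARNING: oldest record is only from {oldest_ts:%Y-%m-%d}; "
          "check the equipment's retention settings.")
else:
    print("Retention looks healthy.")
```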
Unavailable data is a fact of life and something every manufacturer switching to AI will likely have to deal with sooner rather than later. In the case of quality control, missing data can often be generated relatively quickly, especially if the product is manufactured at high volume. The situation is different for predictive maintenance, where it can take many months to generate enough data for the models to learn when a breakdown (a relatively rare occurrence, else you wouldn’t have bought that equipment in the first place) is likely to happen.
A special case that complicates the development of predictive maintenance models is missing or incomplete repair logs. The more detailed the information about the defect type and the repairs undertaken, and the further back it goes, the faster a predictive model can be developed. This requires cross-functional collaboration between manufacturing and maintenance, which, in our experience, might need some encouragement to happen reliably.
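Even a lightweight, structured repair log goes a long way. Below is one possible record layout, a sketch rather than a standard; the field names are assumptions, but the point is that defect type, action taken, and downtime get captured consistently every time.

```python
# Sketch: a minimal structured repair log entry (field names are illustrative).
from dataclasses import dataclass, asdict
from datetime import datetime
import csv

@dataclass
class RepairEvent:
    machine_id: str
    failed_at: datetime
    defect_type: str        # e.g. "dull blade", "belt slip"
    root_cause: str         # free text from the technician
    action_taken: str       # e.g. "blade replaced"
    downtime_minutes: int

def append_event(event: RepairEvent, path: str = "repair_log.csv") -> None:
    """Append one repair event to a shared CSV log."""
    row = asdict(event)
    row["failed_at"] = event.failed_at.isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:   # brand-new file: write the header first
            writer.writeheader()
        writer.writerow(row)

append_event(RepairEvent("stamp-03", datetime.now(), "dull blade",
                         "hot shift, no air conditioning", "blade replaced", 45))
```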
The best way to address data availability challenges is awareness of the possible issues, a well-considered plan to generate the data, the willingness to act quickly to mitigate problems, e.g. by adding sensors or storage capacity or by negotiating access with your equipment supplier, and a commitment to cross-functional collaboration.
Data Challenge 3: No Strategy for Collecting, Curating and Archiving Data
Once you have figured out which data to collect and start collecting it, a new problem arises: how to deal with the onslaught of data. This is particularly relevant for quality control of high-volume consumer products. If you make 100 million bottles of wine, frozen pizzas, or batteries a year and take pictures at one or more stages of production, images accumulate very quickly.
Where is that data stored? Does it have to be cleaned, and if so, how? Is it archived, and if so, where?
These issues aren’t unsolvable but need to be addressed upfront rather than as an afterthought.
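As one concrete example of deciding upfront, the sketch below implements a simple retention policy: inspection images older than a configurable cutoff are moved from fast “hot” storage to a cheaper archive location. The folder names and the 90-day window are illustrative assumptions, not recommendations.

```python
# Sketch: move inspection images older than a cutoff into an archive folder.
import shutil
from datetime import datetime, timedelta
from pathlib import Path

HOT_DIR = Path("inspection_images")   # hypothetical fast, expensive storage
ARCHIVE_DIR = Path("archive")         # hypothetical cheap, slow storage
KEEP_DAYS = 90                        # illustrative retention window

cutoff = datetime.now() - timedelta(days=KEEP_DAYS)

for image in HOT_DIR.glob("*.png"):
    modified = datetime.fromtimestamp(image.stat().st_mtime)
    if modified < cutoff:
        ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
        shutil.move(str(image), str(ARCHIVE_DIR / image.name))
```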
This issue ties in with the next challenge: to deal with all the data, you need a cloud strategy.
Data Challenge 4: Missing Cloud Strategy
Manufacturers everywhere have now adopted the cloud after some initial resistance. In the context of AI implementations, the cloud has both very tangible advantages and very tangible disadvantages.
Let’s look at the upsides first: you can set up a cloud instance in very little time. Armed with nothing more than a credit card, you are up and running and collecting data in about 30 minutes. When you are first getting started with AI, e.g. for a proof-of-concept study, the cloud is the perfect solution.
Once you are collecting millions of data points, however, the cloud gets very expensive very quickly. We are talking hundreds of thousands or even millions of dollars per year. At this point, on-premise solutions are significantly cheaper. We have seen data-heavy use cases where the cost of two or three months’ worth of cloud fees was enough to buy all the computing equipment needed to last for years.
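The break-even point is easy to estimate with a back-of-the-envelope calculation like the one below; all figures are illustrative placeholders, not quotes from any of the cases mentioned above.

```python
# Sketch: when does buying on-premise hardware beat paying monthly cloud fees?
# All numbers are illustrative placeholders; plug in your own quotes.
cloud_cost_per_month = 60_000        # storage + GPU instances + egress
on_prem_hardware = 150_000           # servers, GPUs, storage arrays (one-off)
on_prem_running_per_month = 5_000    # power, space, maintenance

months = 1
while months * cloud_cost_per_month < on_prem_hardware + months * on_prem_running_per_month:
    months += 1

print(f"On-premise pays for itself after roughly {months} months.")
# With these placeholder numbers the hardware is paid off in about 3 months,
# in line with the two-to-three-month break-even we have seen in data-heavy use cases.
```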
Planning Ahead for a Successful AI Implementation
Planning ahead, deciding which equipment to buy, and knowing when to switch from the cloud to on-premise computing are crucial to minimizing the cost of AI solutions.
Implementing AI in manufacturing is not without challenges. Understanding what those challenges are and planning ahead to mitigate them is what differentiates a smooth from a bumpy implementation.
We can help you with your planning and AI implementation roadmap to ensure it is a smooth ride. We are just one click away.
Further Reading
A Good Way to Get Started: AI for Predictive Maintenance