

5 Steps for Machine Learning and Predictive Maintenance

Machine learning is crucial to the next industrial revolution. As equipment and supply chains join the Industrial IoT (IIoT), the flood of data can overwhelm already-busy human supervisors—creating an urgent need for self-regulating automation.

Consider a power plant. Each generating facility contains a complex, interdependent ecosystem of equipment and infrastructure. Collectively, these systems generate enormous amounts of data—up to 1.5 TB per day. Volumes like that far exceed what human operators can analyze, but they fit neatly into the wheelhouse of machine learning.

Think of machine learning as a force multiplier. Machine learning can take over routine operations, freeing up valuable human capital for new analytical tasks. Instead of relying on conventional maintenance schedules, for example, machine learning can monitor equipment and predict maintenance needs in real time.

On a larger scale, consider the sensors distributed across a supervisory control and data acquisition (SCADA) system. A big data system can gather and analyze all of these variables, enabling creation of a model that predicts system-wide behavior. The ability to leverage bulk sensor data in this fashion has huge implications for machine downtime and maintenance costs.

Applications for Machine Learning

Let's return to our power plant example. Power station failures can cascade across systems, causing overloading and widespread blackouts. The Northeast blackout of 2003 was caused in part by system monitoring failures that cascaded across the network until 265 power plants were knocked offline.

To ensure resilience and avoid outcomes like this, both the station and the grid it connects to must adhere to critical infrastructure protection guidelines. The ability to perform predictive maintenance can be critical to this process.

By deploying cameras, vibration sensors, flow rate sensors, and other devices, predictive maintenance systems can monitor equipment in real time. The appearance of hot spots, or subtle changes in vibration patterns, could mean a pump or motor will soon need to be replaced.

There’s even the possibility of detecting problems in equipment that can’t be easily monitored. For example, careful monitoring of input flow rate, fluid temperature, or cavitation in downstream equipment can indicate trouble in an unmonitored piece of upstream equipment.
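
To make the idea concrete, here is a minimal sketch in Python (illustrative only, not toolkit code) of a drift detector that smooths flow-rate readings and raises an alarm when they wander from a known-healthy baseline. The sensor values, smoothing factor, and tolerance are all hypothetical.

    import numpy as np

    def drift_alarms(readings, baseline, alpha=0.1, tolerance=5.0):
        """Return indices where the smoothed signal drifts beyond tolerance.

        baseline  -- expected value recorded when the equipment was healthy
        alpha     -- EWMA smoothing factor (hypothetical default)
        tolerance -- allowed deviation from baseline, in the signal's units
        """
        smoothed = baseline
        alarms = []
        for i, x in enumerate(np.asarray(readings, dtype=float)):
            smoothed = alpha * x + (1 - alpha) * smoothed
            if abs(smoothed - baseline) > tolerance:
                alarms.append(i)            # possible trouble upstream
        return alarms

    # Hypothetical flow-rate trace (L/min) that slowly drifts as a pump degrades
    rng = np.random.default_rng(0)
    flow = rng.normal(40, 0.5, 1000) + np.linspace(0, 10, 1000)
    print(drift_alarms(flow, baseline=40.0)[:5])   # first alarms as drift exceeds 5 L/min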

Developing Systems That Learn

Predictive maintenance and other machine learning algorithms are built in a five-step process illustrated in Figure 1. First, sensor data is collected and sanitized to extract features of interest. Next, the developer selects a learning model and trains it with the collected data. Finally, the model is validated against new real-world data and deployed into the field. If a model fails to exhibit the desired behavior, that failure is fed back into the model to improve its performance in the future.

Figure 1: Machine learning development involves five main stages. (Source: NI)

This approach is reflected in the architecture of the LabVIEW Machine Learning Toolkit from National Instruments. The toolkit supports a wide range of algorithms, protocols, and programs (virtual instruments, or VIs) that can be used to train machine learning models and to discover structures hidden in large volumes of data. Let's use the toolkit to explore the five phases of model development.

1. Data Collection

The obvious place to start is by collecting data from field systems. As you would expect, LabVIEW offers a robust suite of data acquisition tools for training models on instance-specific data.

But it is not always practical to gather field data—for example, if the field systems have not yet been instrumented. For these scenarios, the Machine Learning Toolkit can also work with publicly available data sets. (NASA maintains a set of machine learning data sets that can be used to train models.)

Once acquired, data must be processed and sanitized to detect anomalies and reduce error and noise.

The specific type of noise a machine learning system needs to recognize and discard will vary depending on the equipment being monitored and what the sensor is designed to record. For example, a sensor designed to monitor vibration of a specific component could pick up vibration from another source. Or a camera might suffer a high level of image noise under certain lighting conditions.

Methods for dealing with noise range from simply discarding outlier data to sophisticated analysis, depending on the type of data being recorded. Machine learning algorithms likewise handle the problem in different ways, depending on the nature of the task and the kind of noise encountered.
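
As a minimal illustration of the simplest of these methods, the Python sketch below discards samples that fall more than three standard deviations from the mean. The vibration trace and cutoff are hypothetical, and the snippet is not part of the NI toolkit.

    import numpy as np

    def drop_outliers(samples, cutoff=3.0):
        """Discard samples more than `cutoff` standard deviations from the mean."""
        samples = np.asarray(samples, dtype=float)
        z = np.abs(samples - samples.mean()) / samples.std()
        return samples[z < cutoff]

    # Hypothetical vibration trace (g) with one spurious spike from a nearby machine
    rng = np.random.default_rng(0)
    trace = np.concatenate([rng.normal(0.5, 0.05, 999), [4.2]])
    print(len(drop_outliers(trace)))   # 999: the 4.2 g spike is discarded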

2. Feature Extraction

Feature reduction and data smoothing are also needed to organize data for downstream use. For example, in a system with a large number of variables, it is often necessary to perform feature extraction to identify relevant data points.

LabVIEW provides a number of tools to normalize data and bring out patterns. Dimensional reduction, anomaly detection, clustering, and classification algorithms work together to produce organized data that is ready to be fed to learning systems.
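
To give a flavor of what feature extraction looks like in code, here is a rough Python sketch (using NumPy rather than LabVIEW VIs) that reduces a raw vibration trace to three summary features. The 1 kHz band boundary and the signal itself are hypothetical.

    import numpy as np

    def vibration_features(signal, sample_rate):
        """Reduce a raw vibration trace to a few descriptive features."""
        signal = np.asarray(signal, dtype=float)
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        return {
            "rms": float(np.sqrt(np.mean(signal ** 2))),        # overall energy
            "peak_freq_hz": float(freqs[np.argmax(spectrum)]),  # dominant tone
            # share of energy above 1 kHz, a hypothetical wear indicator
            "high_band_ratio": float(spectrum[freqs > 1000].sum() / spectrum.sum()),
        }

    # Hypothetical 10 kHz capture: a 120 Hz rotation tone plus faint bearing noise
    t = np.arange(0, 1.0, 1.0 / 10_000)
    sig = np.sin(2 * np.pi * 120 * t) + 0.1 * np.sin(2 * np.pi * 3_200 * t)
    print(vibration_features(sig, sample_rate=10_000))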

To complement domain expertise, the toolkit's data sanitization tools include statistically driven cluster validity indices:

  • Rand Index
  • Davies-Bouldin (DB) Index
  • Jaccard Index
  • Dunn Index

There are also tools for evaluating data classification (a sketch of open-source equivalents follows the lists):

  • Classification Accuracy
  • Confusion Matrix
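
Several of these indices have widely used open-source counterparts. The following Python sketch, using scikit-learn rather than the NI toolkit, computes the Davies-Bouldin index, the (adjusted) Rand index, classification accuracy, and a confusion matrix on hypothetical two-cluster data.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import (accuracy_score, adjusted_rand_score,
                                 confusion_matrix, davies_bouldin_score)

    # Two blobs of hypothetical two-feature sensor data with known ground truth
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
    truth = np.array([0] * 50 + [1] * 50)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    print("Davies-Bouldin index:", davies_bouldin_score(X, labels))    # lower is better
    print("Adjusted Rand index:", adjusted_rand_score(truth, labels))  # 1.0 = perfect
    # Accuracy and the confusion matrix are meaningful here only if the
    # cluster numbering happens to line up with the true labels
    print("Accuracy:", accuracy_score(truth, labels))
    print(confusion_matrix(truth, labels))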

3. Model Creation

Machine learning systems get their education in one of two main ways: supervised or unsupervised learning. The two approaches differ on two key points: whether a human labels the data used to train the model, and what kind of data the model can work with.

Supervised learning models start with data points that carry predefined attributes (labels). The models attempt to predict relationships between these data points, and erroneous predictions are fed back into the model to improve its future performance. Supervised learning models in the Machine Learning Toolkit include the following (a sketch follows the list):

  • k-Nearest Neighbors (k-NN)
  • Back-propagation (BP) Neural Network
  • Learning Vector Quantization (LVQ)
  • Support Vector Machine (SVM)
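
As an illustration of the supervised route, here is a short Python sketch using scikit-learn's k-NN classifier as an open-source stand-in for the toolkit's k-NN VI. The features and labels are hypothetical.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical labeled features per sample: [vibration RMS, dominant freq (Hz)]
    X = np.array([[0.50, 120], [0.52, 118], [0.51, 121],
                  [0.90, 310], [0.95, 305], [0.88, 318]])
    y = np.array([0, 0, 0, 1, 1, 1])   # 0 = healthy, 1 = bearing wear

    # Hold back two samples to check predictions against known answers;
    # in practice features would be normalized so the frequency axis
    # does not dominate the distance metric
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=2, random_state=0, stratify=y)

    model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    print("predicted:", model.predict(X_test), "actual:", y_test)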

Unsupervised learning works with unlabeled data, so there is no error signal to feed back into the data stream. This mode excels at identifying spikes, patterns, and clusters of similar and dissimilar data. Unsupervised learning models in the toolkit include the following (a sketch follows the list):

  • Isometric Feature Mapping (Isomap)
  • Locally Linear Embedding (LLE)
  • Multidimensional Scaling (MDS)
  • Principal Component Analysis (PCA)
  • Kernel PCA
  • Linear Discriminant Analysis (LDA)
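
By way of example, the Python sketch below uses scikit-learn's PCA (standing in for the toolkit's PCA VI) to compress twelve correlated sensor channels into two components. The data is synthetic.

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical matrix: 200 time steps x 12 correlated sensor channels
    rng = np.random.default_rng(1)
    hidden = rng.normal(size=(200, 2))                # two underlying behaviors
    X = hidden @ rng.normal(size=(2, 12)) + rng.normal(0.0, 0.05, (200, 12))

    pca = PCA(n_components=2)
    reduced = pca.fit_transform(X)                    # 200 x 2 summary of the data
    print("variance explained:", pca.explained_variance_ratio_.sum())  # close to 1.0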

4. Validation

The toolkit's multi-step workflow can be iterated through arbitrarily complex test cases to construct and validate learning models, including models built using LabVIEW's end-user-facing "front panel" feature.

For systems with large data streams, validation requires tooling to compare expected and actual behavior. Here the toolkit includes functions for validation and visualization of bulk data (a cross-validation sketch follows the list):

  • Visualization (2D & 3D)
  • Plot SOM (self-organizing map, 2D & 3D)
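
For a rough picture of how expected and actual behavior are compared, here is a Python sketch using scikit-learn's cross-validation (an open-source stand-in, not the toolkit's workflow) on the hypothetical labeled features from the earlier supervised sketch.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical labeled features, as in the supervised-learning sketch above
    X = np.array([[0.50, 120], [0.52, 118], [0.51, 121], [0.49, 119],
                  [0.90, 310], [0.95, 305], [0.88, 318], [0.92, 309]])
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    # Hold out a different quarter of the data on each of four passes and
    # compare predicted labels against the actual ones
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=4)
    print("fold accuracies:", scores, "mean:", scores.mean())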

5. Deployment

Deployment requires more than a good model—it also requires hardware capable of handling the computation. For example, LabVIEW models can be deployed on NI's CompactRIO lineup featuring dual- and quad-core Intel Atom® processors.

After deployment, the Machine Learning Toolkit supports real-time condition monitoring at the edge and helps manage the overhead of predictive maintenance. Through every step of this process, the toolkit supports a variety of communication protocols, and it is compatible with Windows 7 and later as well as Linux, so users can monitor and parse new data as it comes in.
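
What does a deployed model look like in practice? The following Python sketch outlines a minimal real-time scoring loop of the sort that might run on edge hardware. The sensor reader, polling rate, and inline training are all hypothetical stand-ins, not NI code.

    import time
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # In practice the model would be trained offline and shipped to the edge
    # device; a tiny stand-in is fitted inline here to keep the sketch runnable
    X = np.array([[0.50, 120], [0.52, 118], [0.90, 310], [0.95, 305]])
    y = np.array([0, 0, 1, 1])                        # 0 = healthy, 1 = fault
    model = KNeighborsClassifier(n_neighbors=1).fit(X, y)

    def read_sensor():
        """Placeholder for a real data-acquisition call on the target hardware."""
        rng = np.random.default_rng()
        return [rng.uniform(0.4, 1.0), rng.uniform(100, 320)]

    for _ in range(10):                               # a real loop would run forever
        features = read_sensor()
        if model.predict([features])[0] == 1:
            print("Maintenance flag raised:", features)
        time.sleep(0.1)                               # polling interval, hypothetical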

Machine Learning at the Edge

The strength of machine learning is its ability to rapidly handle vast quantities of data. In a high-stakes, high-throughput environment, the success of a model depends on how much information it can take in, which means many sensors are required for effective learning.

With a rich sensor profile, machine learning can shorten the time between an adverse event and actionable insight for maintenance schedules, power station operators, and dozens of other industry use cases. It can also decrease data overhead by performing inline network diagnostics and analysis, which reduces the need to move vast quantities of data to and from the cloud for processing and dissemination.

In light of these strengths, models informed by the precision of machine learning and backed by responsive, connected assets are well positioned to help engineers and domain experts turn vast quantities of disorganized data into actionable information, right at the edge.

About the Author

Kenton Williston is an Editorial Consultant to insight.tech and previously served as the Editor-in-Chief of the publication as well as the editor of its predecessor publication, the Embedded Innovator magazine. Kenton received his B.S. in Electrical Engineering in 2000 and has been writing about embedded computing and IoT ever since.
