Guest Author

Artificial Intelligence for Smart Materials – an Image Analysis Approach

Researchers use Artificial Intelligence to better predict the properties of materials, optimise the number of syntheses needed and develop materials faster. Thanks to AI, new analysis techniques, and advances in digital platforms and computing power, materials are becoming more and more adaptable and… smarter.

Think about self-healing concrete, scratch-resistant glasses, self-cleaning windows or performance-enhancing textiles. In this context, a topic of growing interest in materials science research is the use of AI for image analysis.

What is image analysis, and why is it essential in materials research?

Image analysis is the process of extracting meaningful information from images. In materials research, this is often done through image segmentation, in which the features of interest are partitioned into distinct regions for easier and more meaningful analysis.

Most of the images that need to be analysed are acquired using different imaging techniques:

  • optical microscopy,
  • electron microscopy,
  • micro-computed tomography.

Imaging techniques play an essential role in developing smart materials, better understanding their performance in real applications and optimising the manufacturing process.

From looking at different phases in alloys to analysing failures of components and observing different shapes, researchers rely on images to enter the world of unseen microstructures.

While images are a good starting point in any materials analysis, extracting quantitative information is usually the step that takes the innovation of materials to the next level.

How easy is it to go from images to data? Well, it is easy enough with the right tools.

The challenge

While algorithms for image segmentation have been around for a while, they are mainly based on greyscale thresholding, an approach that cannot cope with challenges such as segmenting regions with different textures, colour images, or features that are only distinct under different contrast modes.
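To make the classical approach concrete, the short sketch below applies a single global Otsu threshold using scikit-image, the kind of greyscale method described above. The file name and the assumption that the phases differ mainly in brightness are illustrative placeholders, not taken from any of the tools discussed in this article.

# A minimal sketch of classical greyscale threshold segmentation with scikit-image.
# "micrograph.tif" is a hypothetical input image of a two-phase material.
from skimage import io, filters, measure

image = io.imread("micrograph.tif", as_gray=True)

# Otsu's method picks one global grey-level threshold for the whole image.
threshold = filters.threshold_otsu(image)
binary = image > threshold  # True where the brighter phase is

# Quantify the segmented phase: area fraction and number of connected regions.
phase_fraction = binary.mean()
labels = measure.label(binary)
print(f"Bright-phase area fraction: {phase_fraction:.2%}")
print(f"Connected regions found: {labels.max()}")

A single global threshold like this only works when the phases are separated by brightness alone, which is exactly why textured regions, colour images or multi-contrast data call for the trainable approaches described next.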

In addition, traditional algorithms require high-quality images, which entails time-consuming and expensive sample preparation techniques. Given the limitations of traditional algorithms, automated and unbiased image segmentation is still a challenge for many materials scientists.

From images to data in a few minutes

ZEISS ZEN Intellesis [1] is a trainable, machine learning-based image segmentation software that overcomes the challenges of traditional segmentation techniques. It reduces operator bias, runs automatically and is easy to use for researchers who are not image analysis experts.

ZEISS ZEN Intellesis has a user-friendly interface and is compatible with multi-modal images (CZI, TIFF, JPG, PNG, TXM and all Bio-Formats compatible images) from many microscopy sources (widefield, super-resolution, fluorescence, confocal, light sheet, X-ray and electron microscopy). In just a few clicks, the user loads the image, defines the different classes or regions of interest, and trains the model that is then used to perform the image segmentation.

The trained model can then be reapplied to a stack of images for automatic analysis. Integrated workflows can be used in several materials research applications such as phase fraction analysis in duplex stainless steels, automatic assessment of layer thickness, segmentation of 3D foam glass, mining mineralogy grains or grain size determination of metals and ceramics [1]. An example of segmentation of mineral grains is shown in Figure 1 [2].

Figure 1: Low contrast mining mineralogy grains, imaged using reflected light microscopy. Left: classified with machine learning using ZEISS ZEN Intellesis [2]; Right: unclassified. Image courtesy of ZEISS Research Microscopy Solutions.
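The "load, label, train, apply" workflow described above can be illustrated with a generic trainable pixel segmentation, sketched below using scikit-image features and a scikit-learn random forest. This is only an illustration of the general idea, not ZEISS ZEN Intellesis or its API; the file names and the sparse user-drawn labels are hypothetical placeholders.

# A generic sketch of trainable pixel segmentation (not the Intellesis API):
# simple per-pixel features plus a random forest classifier.
import numpy as np
from skimage import io, filters
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    """Stack a few per-pixel features: raw intensity, smoothed intensity, edges."""
    return np.stack(
        [image, filters.gaussian(image, sigma=2), filters.sobel(image)],
        axis=-1,
    ).reshape(-1, 3)

train_img = io.imread("training_image.tif", as_gray=True)  # hypothetical image
labels = np.load("sparse_labels.npy")  # hypothetical: 0 = unlabelled, 1..n = user-drawn classes

X = pixel_features(train_img)
y = labels.ravel()
annotated = y > 0  # train only on the pixels the user actually labelled

model = RandomForestClassifier(n_estimators=100, n_jobs=-1)
model.fit(X[annotated], y[annotated])

# Re-apply the trained model to another image of the same kind, mirroring the
# "train once, segment a whole stack" workflow.
new_img = io.imread("new_image.tif", as_gray=True)  # hypothetical image
segmentation = model.predict(pixel_features(new_img)).reshape(new_img.shape)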

One of the many applications of ZEISS ZEN Intellesis is determining the size distribution of particles [3]. Particles are used in industrial applications such as coatings, pigments, energy materials, pharmaceuticals and chemicals, and particle research is essential to bringing some of the most innovative materials to market. These include smart coatings used for scratch-resistant glasses and self-cleaning windows.

For these applications, particle size characterisation is critical to checking both performance and quality. Before becoming a product, powders are analysed to ensure that their shape, size distribution and surface area, among other parameters, meet the requirements and are consistent throughout the batch.

In addition, the powders are checked for chemical composition and possible defects to ensure premium quality. Bulk analytical techniques such as laser scattering or sieving are often used to determine particle size distribution. While these methods have been used successfully for a while, they are limited by the composition and size of the powders. What is more, the automated analysis of individual particles within agglomerates remains very challenging.

Advanced image segmentation algorithms that use machine learning can help speed up this analysis and improve the consistency and accuracy of the results, avoiding human operator bias. Unlike traditional image segmentation algorithms, which struggle to separate individual particles from the boundaries between them, ZEISS ZEN Intellesis can be trained to segment individual particles.

Figure 2 shows an example of particles from the sparks of ferrocerium, collected on a silicon substrate, that have been segmented using ZEISS ZEN Intellesis. The machine learning model was trained to identify the background, the boundaries between individual particles, and the particles themselves.

Once the ZEISS ZEN Intellesis model is created and trained, it can easily be integrated into an end-to-end automated workflow to produce personalized reports and to perform quantitative analysis, such as particle area distribution [3].

Figure 2: Workflow of nanoparticle size distribution analysis using ZEISS ZEN Intellesis. (A) Original Scanning Electron Microscope (SEM) image of nanoparticles. (B) Segmented image using Intellesis showing background (blue), boundaries between particles (green) and nanoparticles (red). (C) Image of separated individual nanoparticles using machine learning and further analysis. (D) Particle area distribution of individually segmented nanoparticles [4]. Image courtesy of ZEISS Research Microscopy Solutions.
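As a rough illustration of the quantitative step shown in Figure 2, the sketch below splits touching particles with a distance-transform watershed and measures the particle area distribution. It is a generic scikit-image/SciPy recipe, not the Intellesis report pipeline; "particle_mask.tif" stands in for a binary segmentation in which particle pixels are True.

# Splitting touching particles and measuring their area distribution
# from a hypothetical binary segmentation mask.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, measure, segmentation, feature

binary = io.imread("particle_mask.tif").astype(bool)  # hypothetical input mask

# Peaks of the distance transform give one marker per particle.
distance = ndi.distance_transform_edt(binary)
coords = feature.peak_local_max(distance, min_distance=5, labels=binary)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# The watershed separates particles that touch inside agglomerates.
particle_labels = segmentation.watershed(-distance, markers, mask=binary)

# Per-particle areas give the size distribution.
areas = np.array([region.area for region in measure.regionprops(particle_labels)])
print(f"{particle_labels.max()} particles, mean area {areas.mean():.1f} px^2")
counts, bin_edges = np.histogram(areas, bins=20)  # particle area histogram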

The presented analysis has numerous applications in materials research, such as additive manufacturing powders, composites or ceramics, and provides a thorough understanding of both material properties and processes.

2D analysis is instrumental in better understanding the structure, processes, properties and performance of materials. However, when it comes to porous materials, filters, building materials or composites, a 3D perspective is crucial.

3D image segmentation is essential to observe the evolution of materials in real time, quantify the spatial distribution of particles or inclusions, and study existing defects. ZEISS ZEN Intellesis can also segment 3D data sets in a similar way, by applying the trained model to the 3D stack of images [5].
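One common way to handle image stacks is to apply a model trained on 2D slices to every slice of the volume; the internals of Intellesis are not described here, so the sketch below only illustrates that general pattern. The segment_slice function stands in for any trained 2D model (a simple Otsu threshold is used so the example runs on its own), and "volume.tif" is a hypothetical 3D stack.

# Applying a 2D segmentation, slice by slice, to a 3D image stack.
import numpy as np
from skimage import io, filters

def segment_slice(image_2d):
    """Placeholder for a trained 2D model; returns a per-pixel class map."""
    return (image_2d > filters.threshold_otsu(image_2d)).astype(np.uint8)

volume = io.imread("volume.tif")  # hypothetical stack, shape (n_slices, height, width)
segmented = np.stack([segment_slice(slice_2d) for slice_2d in volume])

# `segmented` holds one class label per voxel and can feed 3D quantification,
# e.g. phase fractions or the spatial distribution of inclusions.
print("Segmented phase fraction:", segmented.mean())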

ZEISS ZEN Intellesis enhances research, increases the accuracy of data and improves productivity in both academic and industrial environments.

The power of deep learning in materials

Like ZEISS ZEN Intellesis, Materials Image Processing and Automated Recognition (MIPAR) [6] offers 2D and 3D image analysis tools in a single package. Using deep learning and a powerful image analysis engine, MIPAR allows users to perform fast, accurate and automated analysis of images.

MIPAR is built as a suite of apps that communicate with each other, offering users a unique set of toolboxes [7]. The image processor, batch processor, real-time processor, post-processor, 3D toolbox and deep learning trainer allow users to create a tailored analysis of their images while still benefiting from a user-friendly interface.

The image processor is where the sequence of image processing steps, called the recipe, is created. The batch processor applies the recipe to a whole set of images at once. Similarly, the real-time processor applies the recipe to images from a particular folder. Once the analysis is performed, the results can be checked and adjusted in the post-processor app. The 3D toolbox allows users to visualise the 3D data.
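As a toy illustration of the recipe-then-batch idea (not MIPAR's actual recipe format or API), the sketch below treats a recipe as an ordered list of processing functions and applies it to every image in a hypothetical folder.

# A recipe as an ordered list of steps, applied to a whole folder of images.
from pathlib import Path
from skimage import io, filters, morphology

recipe = [
    lambda img: filters.gaussian(img, sigma=1),                     # denoise
    lambda img: img > filters.threshold_otsu(img),                  # segment
    lambda img: morphology.remove_small_objects(img, min_size=50),  # clean up
]

def apply_recipe(image, steps):
    for step in steps:
        image = step(image)
    return image

for path in Path("micrographs").glob("*.tif"):  # hypothetical input folder
    mask = apply_recipe(io.imread(path, as_gray=True), recipe)
    io.imsave(path.with_name(path.stem + "_mask.png"), mask.astype("uint8") * 255)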

The deep learning trainer is designed to let users trace the features of interest in a set of images, which are then used to train a model. In three simple steps (trace, train and apply), researchers can create a model that identifies the features of interest and run it on new images to detect complex features.

Depending on the features of interest, a model can be trained on tens of images in a matter of minutes and run on new images in just a few seconds, which further accelerates the innovation of new materials with unique properties.
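The trace-train-apply loop can be sketched with a tiny fully convolutional network in PyTorch. This is only an illustration of the general deep learning workflow, not MIPAR's deep learning trainer; the images and traced masks are random placeholder tensors.

# Trace, train, apply with a minimal fully convolutional network.
import torch
import torch.nn as nn

model = nn.Sequential(  # image in, per-pixel foreground score out
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# "Trace": in practice these would be micrographs and hand-traced masks.
images = torch.rand(8, 1, 128, 128)                 # placeholder training images
masks = (torch.rand(8, 1, 128, 128) > 0.5).float()  # placeholder traced masks

# "Train": a few passes over the traced examples.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()

# "Apply": run the trained model on a new image to detect the features.
with torch.no_grad():
    new_image = torch.rand(1, 1, 128, 128)  # placeholder new micrograph
    prediction = torch.sigmoid(model(new_image)) > 0.5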

Challenges such as grain size measurement, phase identification, and particle and defect analyses can now be solved automatically. In addition, personalized reports (Figure 3) can be generated in order to easily share the data [6].

Figure 3: Example of the report generated for copper alloy grain size measurement using MIPAR [6].
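To show how a segmentation result can become a shareable report like the one in Figure 3, the sketch below measures grains in a hypothetical labelled grain map and writes the results to CSV files. It is a generic scikit-image/pandas recipe, not MIPAR's report generator.

# From a labelled grain map to per-grain measurements and a small summary report.
import numpy as np
import pandas as pd
from skimage import io, measure

grain_labels = io.imread("grain_labels.tif")  # hypothetical labelled grain map

df = pd.DataFrame(measure.regionprops_table(grain_labels, properties=("label", "area")))
df["equivalent_diameter"] = np.sqrt(4 * df["area"] / np.pi)  # circle-equivalent size

summary = {
    "grain count": len(df),
    "mean equivalent diameter (px)": df["equivalent_diameter"].mean(),
    "median grain area (px^2)": df["area"].median(),
}
df.to_csv("grain_measurements.csv", index=False)       # per-grain data
pd.Series(summary).to_csv("grain_report_summary.csv")  # headline numbers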

Using the power of deep learning, complex analyses such as detecting grains while ignoring twins (Figure 4), identifying features in additive manufacturing, or analysing overlapping nanofiber networks (Figure 5) can now be performed automatically. A similar analysis can be run to enhance the performance of textiles.

Figure 4: Detecting grains while ignoring twins using MIPAR. The model was trained on 25 images in 40 minutes and applied to the new image in 2 seconds [6].
Figure 5: Overlapping nanofiber network analysis performed using MIPAR. The model was trained on 36 images in 40 minutes and applied to the new image in 1.5 seconds [6].

In just a few steps, MIPAR brings researchers personalised solutions to problems that have challenged the community for decades.

Data makes you a winner

Image analysis has become increasingly essential to materials researchers in industrial environments who want to extract meaningful information from their data or automate an analysis routine.

ZEISS ZEN Intellesis and MIPAR are easy-to-use, accurate and effective tools for performing such tasks. Even without image segmentation expertise, researchers can now use the power of AI to drive innovation in materials development.

With the fast advancement of automation, what will be the next smart material developed using AI?

"I have always believed materials science will be the next sensation...Fascinating developments are under way from synthesizing new materials that often resemble the nature around us, to observing unbelievable structures at the atomic scale and developing products that improve our life. I am excited to highlight the use of AI and the importance of multidisciplinary innovation in the field of materials science through the Matmatch platform."

References:

[1] ZEISS ZEN Intellesis, https://www.zeiss.com/intellesis (accessed 14 May 2020);
[2] ZEISS Microscopy Blog, https://blogs.zeiss.com/microscopy/en/deep-learning-for-image-segmentation-in-microscopy/ (accessed 14 May 2020);
[3] Stratulat, A., Andrew, M., Bhattiprolu, S. Nanoparticles Research Accelerated by Digital Solutions Platform. Imaging and Microscopy, 2018, 20(3), pp. 16-17;
[4] Barnett, R., Stratulat, A., Andrew, M. Advanced Segmentation for Industrial Materials using Machine Learning. https://www.zeiss.com/intellesis (accessed 14 May 2020);
[5] Andrew, M., Homberger, B. Benchmarking of Machine Learning and Conventional Image Segmentation Techniques on 3D X-Ray Microscopy Data. Proceedings of the 14th International Conference on X-ray Microscopy, 2018, 24(S2), pp. 118-119;
[6] MIPAR, http://www.mipar.us (accessed 14 May 2020);
[7] Sosa, J.M., Huber, D.E., Welk, B.A., Fraser, H.L. MIPAR™: 2D and 3D Image Analysis Software Designed by Materials Scientists, for All Scientists. Microscopy and Microanalysis, 2017, 23(S1), pp. 230-231.

*This article is the work of the guest author shown above. The guest author is solely responsible for the accuracy and the legality of their content. The content of the article and the views expressed therein are solely those of this author and do not reflect the views of Matmatch or of any present or past employers, academic institutions, professional societies, or organizations the author is currently or was previously affiliated with.
