Researchers use Artificial Intelligence to better predict the properties of materials, reduce the number of synthesis experiments and develop materials faster. Thanks to AI, the development of analysis techniques, and the advancement of digital platforms and computing power, materials are becoming more and more adaptable and…smarter.
Think about self-healing concrete, scratch-resistant glasses, self-cleaning windows or performance-enhancing textiles. In this context, a topic of growing interest in materials science research is the use of AI for image analysis.
What is image analysis, and why is it essential in materials research?
Image analysis is the process of extracting meaningful information from images. In materials research, this can be performed using image segmentation, in which an image is partitioned into multiple regions so that the features of interest can be analysed more easily and meaningfully.
Most of the images that need to be analysed are acquired using different imaging techniques:
- optical microscopy,
- electron microscopy,
- or micro-computed tomography.
Imaging techniques play an essential role in developing smart materials, better understanding their performance in real applications and optimising the manufacturing process.
From looking at different phases in alloys to analysing failures of components and observing different shapes, researchers rely on images to enter the world of unseen microstructures.
While images are a good starting point in any materials analysis, extracting quantitative information is usually the step that takes the innovation of materials to the next level.
How easy is it to go from images to data? Well, it is easy enough with the right tools.
While algorithms for image segmentation have been around for a while, they are mainly based on greyscale thresholding, an approach that cannot handle challenges like segmenting regions with different textures, coloured images, or features that are only distinct under different contrast modes.
In addition, traditional algorithms require high-quality images, which entails time-consuming and expensive sample preparation techniques. Given the limitations of traditional algorithms, automated and unbiased image segmentation is still a challenge for many materials scientists.
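To make the limitation concrete, here is a minimal sketch of traditional greyscale threshold segmentation in Python (assuming NumPy; the tiny image and the threshold value are made up for illustration). Every pixel is classified purely by its brightness, which is exactly why texture, colour and contrast-mode differences defeat this approach:

```python
import numpy as np

def threshold_segment(image, threshold):
    """Classical greyscale segmentation: every pixel brighter than the
    threshold is labelled foreground (1), the rest background (0)."""
    return (image > threshold).astype(np.uint8)

# Made-up 4x4 greyscale image (0-255): a bright phase in a dark matrix
img = np.array([[ 10,  20, 200, 210],
                [ 15,  25, 190, 220],
                [ 12, 180, 205,  30],
                [ 11, 175, 215,  35]], dtype=np.uint8)

mask = threshold_segment(img, 128)
print(int(mask.sum()))  # number of pixels assigned to the bright phase
```

Because the decision uses intensity alone, two phases with similar grey levels but different textures would be indistinguishable to this method.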
ZEISS ZEN Intellesis is a trainable, machine-learning-based image segmentation software that overcomes the challenges of traditional segmentation techniques. It reduces operator bias, it is automated, and it is easy for non-experts in image analysis to use.
ZEISS ZEN Intellesis has a user-friendly interface and is compatible with multi-modal images (CZI, TIFF, JPG, PNG, TXM and all Bio-Format compatible images) from many microscopy sources (widefield, super-resolution, fluorescence, confocal, light sheet, X-ray and electron microscopy). In just a few clicks, the user loads the image, defines the different classes or regions of interest and trains the model that is used to perform the image segmentation.
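The load-label-train-segment workflow can be sketched with a deliberately simplified pixel classifier. The nearest-mean model below is a toy stand-in for illustration only, not the algorithm ZEISS ZEN Intellesis actually uses; the image, labels and class names are invented:

```python
import numpy as np

def train_pixel_model(image, labels):
    """Learn one mean intensity per user-defined class.
    `labels` holds -1 for unlabelled pixels and a class id elsewhere."""
    classes = np.unique(labels[labels >= 0])
    return {int(c): float(image[labels == c].mean()) for c in classes}

def segment(image, model):
    """Assign every pixel to the class with the nearest learned mean."""
    classes = sorted(model)
    means = np.array([model[c] for c in classes])
    idx = np.abs(image[..., None] - means).argmin(axis=-1)
    return np.array(classes)[idx]

img = np.array([[ 10,  12, 240],
                [ 11, 250, 245],
                [ 13,  14, 235]], dtype=float)
lab = np.full_like(img, -1, dtype=int)
lab[0, 0] = 0          # user brush stroke: "matrix" class
lab[0, 2] = 1          # user brush stroke: "particle" class

model = train_pixel_model(img, lab)
seg = segment(img, model)
print(int(seg.sum()))  # count of pixels assigned to the particle class
```

A real trainable segmenter uses far richer per-pixel features (texture, neighbourhood statistics, colour) and a proper classifier, but the user-facing loop is the same: label a few regions, train, apply.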
The trained model can then be reapplied to a stack of images for automatic analysis. Integrated workflows can be used in several materials research applications, such as phase fraction analysis in duplex stainless steels, automatic assessment of layer thickness, segmentation of 3D foam glass, mineralogy of mining grains, or grain size determination of metals and ceramics. An example of the segmentation of mineral grains is shown in Figure 1.
One of the many applications of ZEISS ZEN Intellesis is determining the size distribution of particles. Used in industrial applications such as coatings, pigments, energy materials, pharma and chemicals, particle research is essential to bringing some of the most innovative materials to market. These include smart coatings used for scratch-resistant glasses and self-cleaning windows.
For this, particle size characterization is critical to check both their performance and quality. Before becoming a product, these powders are analyzed to ensure that the shape, size distribution and surface area, among other parameters, fit the requirements and are consistent throughout the batch.
In addition, the powders are checked for chemical composition and possible defects to ensure premium quality. Bulk analytical techniques such as laser scattering or sieving are often used to determine particle size distribution. While these methods have been successfully used for a while, they are limited by the composition and size of the powders. What is more, the automated analysis of individual powders in agglomerates is still very challenging.
Advanced image segmentation algorithms that use machine learning can help speed up this analysis and improve the consistency and accuracy of the results, avoiding human operator bias. Unlike traditional image segmentation algorithms, which struggle to differentiate boundaries between individual powders, ZEISS ZEN Intellesis can be trained to segment the powders individually.
Figure 2 shows an example of particles from the sparks of ferrocerium, collected on a silicon substrate, that have been segmented using ZEISS ZEN Intellesis. The machine learning model was trained to successfully identify the background, the individual powders and the boundaries between them.
Once the ZEISS ZEN Intellesis model is created and trained, it can easily be integrated into an end-to-end automated workflow to produce personalized reports and to perform quantitative analysis, such as particle area distribution.
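A particle area distribution can be derived from a segmented binary mask by connected-component labelling: each connected group of foreground pixels is one particle, and its pixel count is its area. The sketch below is a generic 4-connected flood-fill implementation in plain NumPy, not any vendor's code; the mask is invented for illustration:

```python
import numpy as np
from collections import deque

def particle_areas(mask):
    """4-connected component labelling of a binary mask (BFS flood
    fill). Returns the list of particle areas in pixels, sorted."""
    visited = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                area, queue = 0, deque([(r, c)])
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return sorted(areas)

# Toy segmented mask: three separate "particles"
mask = np.array([[1, 1, 0, 0, 0],
                 [1, 1, 0, 1, 1],
                 [0, 0, 0, 1, 1],
                 [1, 0, 0, 0, 0]], dtype=bool)

areas = particle_areas(mask)
print(areas)  # per-particle areas in pixels
```

From the list of areas, a histogram gives the area distribution directly, and an equivalent-circle diameter can be computed per particle if a size distribution is preferred.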
The presented analysis has numerous applications in materials research, such as additive manufacturing powders, composites or ceramics, and provides a thorough understanding of both material properties and processes.
2D analysis is instrumental in better understanding the structure, processes, properties and performance of materials. However, when it comes to porous materials, filters, building materials or composites, a 3D perspective is crucial.
3D image segmentation is essential to observe the evolution of materials in real time, quantify the spatial distribution of particles or inclusions, and study existing defects. ZEISS ZEN Intellesis can also segment 3D data sets in a similar way, by applying the trained model to the 3D stack of images.
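Applying a trained 2D model to a 3D data set can be sketched as running the model slice by slice through the stack. In the sketch below, `segment_slice` is a hypothetical stand-in (a fixed threshold) for an actual trained model, and the random volume is synthetic:

```python
import numpy as np

def segment_slice(img2d, threshold=128):
    """Stand-in for a trained 2D segmentation model. Assumption: a
    fixed greyscale threshold, used here only to keep the sketch
    self-contained; a real workflow would apply the trained model."""
    return (img2d > threshold).astype(np.uint8)

def segment_stack(volume):
    """Apply the 2D model to every slice of a 3D image stack."""
    return np.stack([segment_slice(s) for s in volume])

# Synthetic 3D stack: 5 slices of 8x8 greyscale data
vol = np.random.default_rng(0).integers(0, 256, size=(5, 8, 8))
seg = segment_stack(vol)
print(seg.shape)  # one segmented slice per input slice
```

The segmented stack can then be fed to 3D quantification (pore volume fractions, inclusion spacing) or volume rendering.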
ZEISS ZEN Intellesis enhances research, increases the accuracy of data and improves productivity in both academic and industrial environments.
The power of deep learning in materials
Similar to ZEISS ZEN Intellesis, Materials Image Processing and Automated Recognition (MIPAR) offers 2D and 3D image analysis tools in a single package. Using deep learning AI and a powerful image analysis engine, MIPAR allows users to perform fast, accurate and automated analysis of images.
MIPAR is app-suite based, and the individual apps can communicate with each other, offering users a unique set of toolboxes. The image processor, batch processor, real-time processor, post-processor, 3D toolbox and deep learning trainer are the apps that allow users to create a tailored analysis of their images while still benefiting from a user-friendly interface.
The image processor is where the sequence of image processing steps, called the recipe, is created. The batch processor applies the recipe to a whole set of images at once. Similar to the batch processor, the real-time processor applies the recipe to images from a particular folder. Once the analysis is performed, the results can be checked and adjusted in the post-processor app. The 3D toolbox allows users to visualize 3D data.
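The recipe and batch-processor idea can be sketched generically in Python: a recipe is an ordered list of processing steps, and batch processing means running the same recipe on every image in a set. The steps below are illustrative stand-ins, not MIPAR's actual operations:

```python
import numpy as np

# A "recipe" as an ordered list of image-processing steps.
# Assumption: these three toy steps are invented for illustration.
recipe = [
    lambda im: im.astype(float) / 255.0,       # normalise to [0, 1]
    lambda im: np.clip(im * 1.2, 0.0, 1.0),    # simple contrast boost
    lambda im: (im > 0.5).astype(np.uint8),    # segment
]

def apply_recipe(image, steps):
    """Run each step of the recipe in order on one image."""
    for step in steps:
        image = step(image)
    return image

def batch_process(images, steps):
    """Batch processor: the same recipe applied to every image."""
    return [apply_recipe(im, steps) for im in images]

# Two synthetic 2x2 images, one dark and one bright
batch = [np.full((2, 2), v, dtype=np.uint8) for v in (40, 200)]
results = batch_process(batch, recipe)
print([int(r.sum()) for r in results])  # foreground pixels per image
```

Encoding the analysis as a reusable recipe is what makes results reproducible across images and operators, which is the same motivation behind MIPAR's batch and real-time processors.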
The deep learning app is designed to trace the features of interest in a set of images, which are then used to train a model. In three simple steps (trace, train and apply), researchers can create a model that identifies the features of interest and run it on new images to detect complex features.
Depending on the features of interest, a model can be trained on tens of images in just a matter of minutes and can be run on new images in just a few seconds, which further advances the innovation of new materials with unique properties.
Challenges such as grain size measurement, phase identification, and particle and defect analyses can now be solved automatically. In addition, personalized reports (Figure 3) can be generated in order to easily share the data.
Using the power of deep learning, complex analyses such as twinned grain identification (Figure 4), additive manufacturing feature identification, or nanofiber analysis (Figure 5) can now be performed automatically. A similar analysis can be run to enhance the performance of textiles.
In just a few steps, MIPAR brings researchers personalized solutions to problems that have challenged the community for decades.
Data makes you a winner
Image analysis has become increasingly essential for materials researchers in industrial environments who want to extract meaningful information from their data or automate an analysis routine.
ZEISS ZEN Intellesis and MIPAR are easy-to-use, accurate and effective tools for such tasks. Even without image segmentation expertise, researchers can now use the power of AI to bring innovation to materials development.
With the fast advancement of automation, what will be the next smart material developed using AI?
https://www.zeiss.com/intellesis. ZEISS. [Cited 14 May 2020];
https://blogs.zeiss.com/microscopy/en/deep-learning-for-image-segmentation-in-microscopy/. ZEISS. [Cited 14 May 2020];
Stratulat, A., Andrew, M., Bhattiprolu, S. Nanoparticles Research Accelerated by Digital Solutions Platform. Imaging and Microscopy, 2018. 20:3: p. 16-17;
Barnett, R., Stratulat, A., Andrew, M. Advanced Segmentation for Industrial Materials using Machine Learning. https://www.zeiss.com/intellesis. ZEISS. [Cited 14 May 2020];
Andrew, M., Homberger, B. Benchmarking of Machine Learning and Conventional Image Segmentation Techniques on 3D X-Ray Microscopy Data. Proceedings of the 14th International Conference on X-ray Microscopy, 2018. 24:S2: p. 118-119;
http://www.mipar.us. MIPAR. [Cited 14 May 2020];
Sosa, J.M., Huber, D.E., Welk, B.A., Fraser, H.L. MIPAR™: 2D and 3D Image Analysis Software Designed by Materials Scientists, for All Scientists. Microscopy and Microanalysis, 2017. 23:S1: p. 230-231;
*This article is the work of the guest author shown above. The guest author is solely responsible for the accuracy and the legality of their content. The content of the article and the views expressed therein are solely those of this author and do not reflect the views of Matmatch or of any present or past employers, academic institutions, professional societies, or organizations the author is currently or was previously affiliated with.