Text-to-Microstructure Generation Using Generative Deep Learning.
Xiaoyang Zheng, Ikumu Watanabe, Jamie Paik, Jingjing Li, Xiaofeng Guo, Masanobu Naito
Published in: Small (Weinheim an der Bergstrasse, Germany) (2024)
Designing novel materials depends heavily on understanding the design principles, physical mechanisms, and modeling methods of material microstructures, and typically requires experienced designers and several rounds of trial and error. Although recent advances in deep generative networks have enabled the inverse design of material microstructures, most studies rely on property-conditional generation and focus on a specific type of structure, resulting in limited generation diversity and poor human-computer interaction. In this study, a pioneering text-to-microstructure deep generative network (Txt2Microstruct-Net) is proposed that generates 3D material microstructures directly from text prompts without additional optimization procedures. The Txt2Microstruct-Net model is trained on a large microstructure-caption paired dataset that can be extended using the provided algorithms. Moreover, the model is sufficiently flexible to generate different geometric representations, such as voxels and point clouds. The model's performance is also demonstrated in the inverse design of material microstructures and metamaterials. It has promising potential for interactive microstructure design when combined with large language models and could serve as a user-friendly tool for material design and discovery.
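To make the text-to-microstructure workflow concrete, the sketch below shows a minimal text-conditioned voxel generator in PyTorch: a caption is encoded into a conditioning vector and decoded into an occupancy grid in a single forward pass, with no further optimization loop. All module names, dimensions, and architectural choices here are illustrative assumptions, not the authors' Txt2Microstruct-Net implementation.

```python
# Hypothetical sketch of text-conditioned 3D voxel generation; architecture and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Embed a tokenized caption into a single conditioning vector."""
    def __init__(self, vocab_size=10000, embed_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)
        _, h = self.rnn(x)                      # final hidden state
        return h.squeeze(0)                     # (batch, embed_dim)

class VoxelDecoder(nn.Module):
    """Map a conditioning vector to a 32^3 occupancy grid."""
    def __init__(self, embed_dim=256, res=32):
        super().__init__()
        self.res = res
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 512), nn.ReLU(),
            nn.Linear(512, res ** 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.res, self.res, self.res)

class Txt2Voxel(nn.Module):
    """Text prompt -> voxelized microstructure in one forward pass."""
    def __init__(self):
        super().__init__()
        self.encoder = TextEncoder()
        self.decoder = VoxelDecoder()

    def forward(self, token_ids):
        return self.decoder(self.encoder(token_ids))

# Usage: generate a 32^3 occupancy grid from a (mock) tokenized prompt.
model = Txt2Voxel()
prompt_tokens = torch.randint(0, 10000, (1, 16))  # stand-in for a tokenized caption
voxels = model(prompt_tokens)                     # (1, 32, 32, 32), values in [0, 1]
```

In the same spirit, the voxel decoder could be swapped for a point-cloud head that outputs an (N, 3) coordinate set, which is one way a single conditioning vector can support the different geometric representations mentioned in the abstract.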