Understanding structure–property relationships has long been a central theme in theoretical studies of physics, chemistry, and materials science. With the rapid development of artificial intelligence and deep learning in scientific computing, data-driven machine learning models have demonstrated remarkable success in efficiently predicting scalar physicochemical properties of atomistic systems, such as energies and charges, thereby greatly expanding the accessible chemical and materials space. However, many properties that ultimately determine the functionality of molecules and materials are not scalar invariants but response properties, including dipole moments, polarizabilities, dielectric permittivity, and elastic constants. These quantities describe how a system responds to external electric, magnetic, or mechanical perturbations, and they correspond directly to experimentally measurable observables in spectroscopy, electromagnetism, and mechanics.

Mathematically, these response properties are tensorial quantities that obey strict constraints of geometric equivariance and fundamental symmetry. These requirements introduce a level of complexity that fundamentally distinguishes tensorial targets from scalar properties and poses significant challenges for machine learning models, especially for approaches based on geometrically invariant representations. In contrast, equivariant graph neural networks (EGNNs) are, in principle, well suited for modeling tensorial properties. Nevertheless, most existing approaches are tailored to specific tensor types and rely heavily on the particular order and symmetry of the target tensor, which limits their generality and scalability. These limitations become especially pronounced for higher-order tensors with complex symmetry constraints. Furthermore, for crystalline systems, tensorial properties must additionally satisfy intrinsic symmetry imposed by the crystal space group. Enabling a model to learn these strict geometric constraints without explicitly hard-coding crystallographic symmetry rules remains an open challenge in geometric deep learning.
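The equivariance constraint described above can be made concrete for a rank-2 response tensor such as the polarizability: if the system is rotated by a matrix R, the predicted tensor must transform as α → R α Rᵀ, which in particular preserves rotationally invariant quantities such as the trace and the eigenvalues. The following is a minimal numerical sketch of this transformation rule with illustrative values, not code or data from the paper:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Illustrative symmetric polarizability tensor (arbitrary units).
alpha = np.array([[10.0, 1.0, 0.5],
                  [1.0, 12.0, 0.2],
                  [0.5,  0.2, 9.0]])

R = rot_z(0.7)
# Equivariance: the tensor predicted for the rotated system must equal R @ alpha @ R.T.
alpha_rot = R @ alpha @ R.T

# Rotation preserves the fundamental (index-permutation) symmetry...
assert np.allclose(alpha_rot, alpha_rot.T)
# ...and the rotationally invariant content: trace and eigenvalue spectrum.
assert np.isclose(np.trace(alpha_rot), np.trace(alpha))
assert np.allclose(np.linalg.eigvalsh(alpha_rot), np.linalg.eigvalsh(alpha))
```

An invariant-representation model can only capture the preserved scalars (trace, eigenvalues); reproducing the full orientation dependence of α requires an architecture whose outputs transform equivariantly with the input geometry.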
Against this background, the research group led by Prof. Xin Xu developed a general tensor output framework for equivariant graph neural networks, grounded in a fundamental group-theoretical principle: any Cartesian tensor with a given fundamental symmetry corresponds uniquely to a direct sum of irreducible representations of the rotation group. Leveraging this principle, the authors designed a universal output module—referred to as the node-wise self-mix layer—which concentrates symmetry handling at the output stage. This design enables equivariant graph neural networks to predict tensorial properties of arbitrary order and arbitrary fundamental symmetry in an end-to-end manner, and is applicable to atom-wise, molecular, and periodic crystalline systems alike.
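The group-theoretical principle invoked here can be illustrated in its simplest nontrivial case: a general rank-2 Cartesian tensor decomposes into an isotropic part (l = 0, one component), an antisymmetric part (l = 1, three components, equivalent to an axial vector), and a symmetric traceless part (l = 2, five components), with 1 + 3 + 5 = 9 recovering the full tensor. The sketch below shows this standard decomposition; it is an illustration of the mathematical principle, not code from the paper:

```python
import numpy as np

# A generic rank-2 Cartesian tensor (illustrative values).
T = np.arange(9.0).reshape(3, 3)

# Irreducible pieces under rotation:
iso = (np.trace(T) / 3.0) * np.eye(3)        # l = 0: isotropic (1 component)
anti = (T - T.T) / 2.0                       # l = 1: antisymmetric (3 components)
sym_traceless = (T + T.T) / 2.0 - iso        # l = 2: symmetric traceless (5 components)

# The direct sum of irreps reconstructs the original Cartesian tensor exactly.
assert np.allclose(iso + anti + sym_traceless, T)
assert np.isclose(np.trace(sym_traceless), 0.0)
assert np.allclose(anti, -anti.T)
```

A fundamental symmetry of the target simply removes some of these irreducible channels: a symmetric tensor like the polarizability carries only the l = 0 and l = 2 parts, so an output layer built from irreps can enforce the symmetry by construction, for any tensor order.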
By integrating this output module with the previously proposed SE(3)-equivariant architecture XPaiNN, the authors achieved accurate and efficient predictions of a wide range of tensorial response properties, including molecular polarizabilities and hyperpolarizabilities, NMR chemical shielding tensors, dielectric permittivity, Born effective charges, and elastic tensors. Across multiple benchmark data sets, the proposed approach consistently outperforms or matches state-of-the-art methods. Notably, for elastic tensors—fourth-order tensors with stringent symmetry requirements—the results demonstrate that intrinsic crystal symmetry is naturally satisfied without any explicit symmetry encoding. This behavior arises from the symmetry-consistent hidden representations generated by the XPaiNN message-passing network, leading to improved numerical stability and reduced errors in derived isotropic elastic moduli. These features provide a reliable foundation for high-throughput materials screening based on mechanical properties.
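The isotropic elastic moduli mentioned above are standard contractions of the fourth-order elastic tensor; for example, the Voigt-average bulk and shear moduli follow directly from the 6×6 stiffness matrix in Voigt notation. The sketch below uses illustrative, roughly diamond-like cubic constants; the values are assumptions for demonstration, not results from the paper:

```python
import numpy as np

# Assumed cubic elastic constants in GPa (roughly diamond-like, for illustration).
C11, C12, C44 = 1079.0, 124.0, 578.0

# Assemble the 6x6 stiffness matrix in Voigt notation for a cubic crystal.
C = np.zeros((6, 6))
C[:3, :3] = C12
np.fill_diagonal(C[:3, :3], C11)
C[3:, 3:] = np.diag([C44, C44, C44])

# Voigt averages (upper bounds on the isotropic moduli):
#   K_V = (C11 + C22 + C33 + 2(C12 + C13 + C23)) / 9
#   G_V = (C11 + C22 + C33 - (C12 + C13 + C23) + 3(C44 + C55 + C66)) / 15
diag = np.trace(C[:3, :3])                   # C11 + C22 + C33
offdiag = (C[:3, :3].sum() - diag) / 2.0     # C12 + C13 + C23
shear = np.trace(C[3:, 3:])                  # C44 + C55 + C66
K_V = (diag + 2.0 * offdiag) / 9.0
G_V = (diag - offdiag + 3.0 * shear) / 15.0

print(f"K_V = {K_V:.1f} GPa, G_V = {G_V:.1f} GPa")  # → K_V = 442.3 GPa, G_V = 537.8 GPa
```

Because these moduli are linear contractions of the predicted stiffness matrix, errors and symmetry violations in the learned elastic tensor propagate directly into them, which is why symmetry-consistent predictions translate into more reliable derived moduli.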
Overall, this work establishes a systematic and extensible paradigm for learning tensorial properties within geometric deep learning. From an application perspective, it significantly broadens the scope of machine learning in the design of functional molecules and materials with optical, electrical, and mechanical functionalities. In particular, it offers a powerful and general tool for AI-assisted discovery in anisotropic and nonlinear materials, highlighting the potential of equivariant machine learning to advance scientific understanding and materials innovation.
J. Am. Chem. Soc. 2025, 147, 51, 47044–47056.
https://doi.org/10.1021/jacs.5c12428