Neural Network Force Fields for Molecular Dynamics Simulations: A Comprehensive Review
In recent years, there has been a surge of research applying advanced machine learning, in particular neural networks, to classical molecular dynamics and force-field parameterization. Since this has been the topic of my PhD work, I wanted to explore the field and try to summarize recent advances in it. For this I wanted to test the Gemini Research feature. Thus, this post is written by Gemini, and the results are very interesting.
It obviously favoured some more 'online-present' articles and papers, but it definitely explored more things than I could in 20 mins (that's how long it took to process). The writing and formatting are obviously not scientific, but it's very useful for first insights. Let's see.
1. Introduction: The Evolving Landscape of Molecular Dynamics and Force Fields
Molecular dynamics (MD) simulations stand as a powerful and widely employed computational technique, offering the ability to scrutinize the intricate nature of chemical processes at the atomic level, particularly within the realm of proteins (1). These simulations provide a dynamic view of molecular systems, enabling researchers to investigate phenomena ranging from protein folding and unfolding to drug-ligand interactions and material properties. The accuracy of the insights gleaned from MD simulations is fundamentally contingent upon the quality of the force fields employed (1). Force fields serve as the mathematical representation of the potential energy landscape that governs the interactions between atoms within the simulated system. A precise and reliable force field is therefore paramount for the credibility and utility of MD simulations, as it dictates the forces acting on each atom and consequently the overall behavior of the system.
Traditionally, molecular mechanical (MM) force fields have been the mainstay for MD simulations, primarily due to their computational efficiency (1). These force fields utilize simplified analytical functions with parameters derived from experimental data or high-level quantum mechanical calculations. While MM force fields allow for the simulation of large systems over extended periods, they often entail a compromise in accuracy, particularly when dealing with complex chemical environments or phenomena involving electronic polarization and charge transfer (1). In contrast, ab initio, or quantum mechanical (QM), methods offer a more fundamental and accurate description of interatomic interactions by explicitly considering the electronic structure of the system (1). However, the high computational cost associated with QM calculations renders them impractical for simulating large systems or long-timescale processes, thereby limiting their applicability to many problems of biological and material science relevance.
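The "simplified analytical functions" of an MM force field can be made concrete with a minimal sketch. The snippet below evaluates a toy potential with only a harmonic bond term and a 12-6 Lennard-Jones term; the parameter values are illustrative stand-ins, not taken from any published force field.

```python
import math

def mm_energy(coords, bonds, pairs):
    """Toy molecular-mechanics energy: harmonic bonds plus 12-6
    Lennard-Jones nonbonded pairs. Parameters are illustrative only."""
    def dist(i, j):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])))

    e = 0.0
    # Bonded term: E = k (r - r0)^2 for each bond (i, j, k, r0)
    for i, j, k, r0 in bonds:
        e += k * (dist(i, j) - r0) ** 2
    # Nonbonded term: 4*eps*((sigma/r)^12 - (sigma/r)^6) per pair (i, j, eps, sigma)
    for i, j, eps, sigma in pairs:
        sr6 = (sigma / dist(i, j)) ** 6
        e += 4.0 * eps * (sr6 ** 2 - sr6)
    return e

# Two atoms at their equilibrium bond length: the bond term vanishes.
coords = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
print(mm_energy(coords, bonds=[(0, 1, 300.0, 1.5)], pairs=[]))  # → 0.0
```

Every term has a fixed functional form and fixed parameters; this rigidity is exactly what makes MM fast, and what MLFFs relax by learning the energy function from data.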
The advent of machine learning (ML), and in particular neural networks (NNs), has introduced a promising paradigm for bridging the gap between the accuracy of QM methods and the computational efficiency of MM force fields (3). Machine learning force fields (MLFFs) leverage the ability of neural networks to learn complex, non-linear relationships from vast amounts of data. By training on high-accuracy QM data, MLFFs can potentially achieve a level of accuracy approaching that of QM methods while maintaining a computational cost closer to that of classical MM force fields. This capability holds the promise of enabling more accurate and efficient simulations of a wider range of molecular systems and processes. The article of primary interest could not be accessed via the provided link (5), but the surrounding information indicates that it likely focuses on "Molecular Dynamics Neural Networks force-field parameterization" (1). While a direct analysis of this specific article is not feasible, the wealth of related research snippets provides a substantial foundation for a comprehensive discussion of the current state and future directions of neural network force fields in molecular dynamics simulations.
2. The Rise of Neural Network Force Fields: A Paradigm Shift in Molecular Simulations
Machine learning force fields (MLFFs) represent a significant departure from traditional approaches to molecular simulations by employing neural networks to learn the intricate relationships governing atomic interactions directly from high-fidelity quantum mechanical data (3). This data-driven strategy allows MLFFs to move beyond the limitations inherent in the fixed functional forms and empirically derived parameters that characterize conventional force fields. Instead, neural networks can discern complex patterns and dependencies within the QM data, enabling them to potentially achieve a level of accuracy that rivals QM methods while offering computational efficiency more akin to that of classical MM force fields (3).
A key advantage of MLFFs lies in their adaptability to diverse chemical environments and their capacity to capture intricate chemical behaviors with remarkable precision (3). Unlike traditional force fields, whose parameters are often fitted to specific chemical groups or atom types, MLFFs can learn continuous representations of atomic environments, allowing them to handle a broader range of chemical species and interactions with greater fidelity. This flexibility is particularly crucial for studying complex systems where traditional force fields may struggle to accurately represent all the underlying interactions. The emergence of MLFFs signifies a fundamental shift in the methodology of molecular simulations, moving from a reliance on manually parameterized potentials towards the use of algorithmic learning from data. This transition necessitates the integration of expertise from both computational chemistry and machine learning, opening up new avenues for automating and refining the development of force fields for a wide array of applications.
3. Methodological Innovations in Neural Network Force-Field Parameterization: Diverse Approaches to Enhance Accuracy and Efficiency
The field of neural network force-field parameterization is characterized by a diverse array of methodological innovations, reflecting the ongoing efforts to enhance the accuracy, efficiency, and generalizability of these methods. These innovations can be broadly categorized into hybrid approaches, the utilization of graph neural networks, and the application of active learning and parameter optimization techniques.
3.1. Hybrid Approaches
One prominent direction involves the development of hybrid force fields that combine the strengths of neural networks with more traditional functional forms. The SO3LR method exemplifies this approach by integrating the SO3krates neural network, which excels at capturing semi-local interactions, with universal pairwise force fields designed to model long-range electrostatic and dispersion interactions (4). This combination seeks to address the inherent challenge of accurately representing both short-range quantum mechanical effects and long-range physical forces, leveraging the strengths of each component. The design of SO3LR suggests a modular strategy where different aspects of the potential energy landscape are handled by specialized models, potentially leading to improved accuracy and interpretability.
Another notable hybrid approach involves combining an analytically polarizable force field, such as ARROW, with a short-range neural network correction for the total intermolecular interaction energy (9). Polarizability, the ability of a molecule’s electron cloud to distort in response to an electric field, is a crucial factor in accurately modeling intermolecular interactions, particularly in condensed phases and systems with significant electrostatic contributions. By using a neural network to correct the shortcomings of the analytical force field at short range, this method aims to improve overall accuracy while still benefiting from the physical interpretability and efficiency of the polarizable force field for longer-range interactions. This strategy highlights the potential of using neural networks to refine and augment existing force field methodologies, targeting specific areas where traditional forms may fall short.
Furthermore, the NNP/MM hybrid method represents a practical approach for tackling large biomolecular systems (2). This method involves modeling a critical region of the system, such as the binding site of a protein-ligand complex, using a more accurate but computationally intensive neural network potential (NNP), while the remainder of the system is treated with a traditional, computationally efficient molecular mechanics (MM) force field. This strategy allows researchers to focus computational resources on the regions of greatest interest, where high accuracy is paramount, while still enabling simulations of the entire system at a manageable cost. The NNP/MM approach demonstrates a pragmatic way to apply the power of neural network potentials to complex biological problems where a full NNP simulation might be prohibitively expensive.
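The partitioning idea behind NNP/MM can be sketched in a few lines. The snippet below shows a schematic mechanical-embedding combination, with the three energy functions as placeholder lambdas; the actual coupling scheme in the cited NNP/MM work differs in detail, so this is a sketch of the general strategy, not of that method's implementation.

```python
def hybrid_energy(atoms, nnp_region, e_nnp, e_mm, e_coupling):
    """Schematic NNP/MM partition: the region of interest is scored by
    the neural network potential, the environment by the MM force field,
    and the cross terms by an MM-level coupling. The three callables are
    placeholders for real models."""
    region = [a for a in atoms if a in nnp_region]
    environment = [a for a in atoms if a not in nnp_region]
    return e_nnp(region) + e_mm(environment) + e_coupling(region, environment)

# Stand-in energy functions: a fixed contribution per atom / per cross pair.
e = hybrid_energy(
    atoms=list(range(5)),
    nnp_region={0, 1},
    e_nnp=lambda r: 1.0 * len(r),
    e_mm=lambda env: 0.5 * len(env),
    e_coupling=lambda r, env: 0.1 * len(r) * len(env),
)
print(e)  # ≈ 4.1 : 2 NNP atoms + 3 MM atoms + 6 cross pairs
```

Only the (small) NNP region pays the neural network's evaluation cost; the rest of the system scales at MM cost, which is what makes the approach tractable for full protein-ligand complexes.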
3.2. Graph Neural Networks (GNNs)
Graph neural networks have emerged as a powerful tool for learning representations of molecular structures and predicting their properties, including force fields. Grappa stands out as a GNN architecture specifically designed to predict MM force field parameters directly from the molecular graph (7). This approach bypasses the traditional need for hand-crafted chemical features and demonstrates that neural networks can learn to parameterize established MM force field functional forms with enhanced accuracy. Instead of directly predicting energies and forces, Grappa learns to assign parameters such as bond lengths, angles, and dihedral angles, which are then used within the standard MM energy function. The success of Grappa suggests that machine learning can be effectively employed to systematically improve the accuracy of existing force field formalisms, potentially leading to more reliable simulations across a broad range of chemical space.
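Grappa's two-stage design, learned parameter assignment feeding a fixed MM energy expression, can be illustrated as follows. Here a trivial lookup table stands in for the GNN, and only a harmonic bond term is evaluated; the names and values are hypothetical.

```python
import math

def predict_bond_params(graph):
    """Stand-in for a learned parameter predictor (the role a GNN such as
    Grappa plays): map each bond in the molecular graph to a force
    constant k and equilibrium length r0. A lookup table replaces the
    neural network here; values are illustrative."""
    table = {("C", "C"): (310.0, 1.53), ("C", "H"): (340.0, 1.09)}
    params = {}
    for i, j in graph["bonds"]:
        key = tuple(sorted((graph["elements"][i], graph["elements"][j])))
        params[(i, j)] = table[key]
    return params

def bond_energy(coords, params):
    """Standard MM harmonic bond energy, evaluated with predicted params."""
    e = 0.0
    for (i, j), (k, r0) in params.items():
        r = math.dist(coords[i], coords[j])
        e += k * (r - r0) ** 2
    return e

graph = {"elements": ["C", "C"], "bonds": [(0, 1)]}
params = predict_bond_params(graph)
coords = [(0.0, 0.0, 0.0), (1.53, 0.0, 0.0)]
print(bond_energy(coords, params))  # equilibrium geometry → 0.0
```

The key design point is the separation: the learned component outputs parameters once per molecule, after which simulation proceeds with the ordinary, fast MM energy function and all existing MD machinery.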
CHARMM-NN represents another significant application of neural networks in force field development, focusing specifically on proteins (1). This method constructs general and transferable neural network force fields for proteins by training on smaller fragments derived from the residue-based systematic molecular fragmentation (rSMF) method. Proteins, with their complex three-dimensional structures and diverse interactions, often require specialized force fields to accurately capture their behavior. Training on smaller, representative fragments allows CHARMM-NN to learn the fundamental interactions within proteins and then transfer this knowledge to larger and more diverse protein systems, potentially improving the accuracy of protein simulations. The use of fragmentation strategies, a common technique in computational chemistry for handling large molecules, highlights a logical adaptation for training neural network force fields in this domain.
Furthermore, the work of Kristof T. Schütt and colleagues has led to the development of several influential deep learning architectures, including SchNet, PaiNN, and SpookyNet, specifically tailored for modeling atomistic systems and predicting potential energy surfaces and energy-conserving force fields (12). These architectures are designed to inherently respect the fundamental physical symmetries of molecular systems, such as rotational and translational invariance, ensuring that the predicted energies and forces are physically meaningful. SchNet, for instance, utilizes continuous-filter convolutional layers to learn chemically plausible embeddings of atom types across the periodic table. PaiNN introduces polarizable atom interaction neural networks, improving performance on common molecule benchmarks while reducing model size and inference time. SpookyNet explicitly treats electronic degrees of freedom, addressing a critical aspect often overlooked in other models. The development of these specialized NN architectures underscores the ongoing research into identifying the most effective ways to represent and learn the complex nature of atomic interactions.
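The symmetry argument these architectures rely on can be demonstrated with a toy model: if the total energy is a sum of per-atom contributions computed only from interatomic distances, rotational and translational invariance come for free. The per-atom function below is a placeholder for a trained network, not any real architecture.

```python
import math

def atomic_energy_sum(coords, cutoff=3.0):
    """Minimal sketch of the atomic decomposition used by SchNet-like
    models: total energy = sum over atoms of a function of each atom's
    local environment, described here only by interatomic distances.
    Distances are invariant to rotation and translation, so the energy
    inherits those symmetries. The per-atom function is a toy stand-in
    for a trained network."""
    def env_energy(distances):
        return sum(math.exp(-d) for d in distances)  # placeholder "network"

    total = 0.0
    for i, ri in enumerate(coords):
        neighbors = [math.dist(ri, rj) for j, rj in enumerate(coords)
                     if j != i and math.dist(ri, rj) < cutoff]
        total += env_energy(neighbors)
    return total

coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.2, 0.0)]
shifted = [(x + 5.0, y - 2.0, z + 1.0) for x, y, z in coords]
# Translating the whole system leaves the energy unchanged.
print(abs(atomic_energy_sum(coords) - atomic_energy_sum(shifted)) < 1e-9)  # True
```

Real architectures differ in how they build the environment representation (continuous-filter convolutions in SchNet, equivariant vector features in PaiNN, explicit electronic degrees of freedom in SpookyNet), but all preserve these invariances by construction.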
3.3. Active Learning and Parameter Optimization
Beyond specific network architectures, methodologies for the efficient training and optimization of neural network force fields are crucial. The iterative optimization of force field parameters for materials like silica (SiO2) using active learning and Gaussian process regression provides a compelling example (6). In this approach, the goal is to optimize a set of force field parameters, such as partial charges and van der Waals parameters, by minimizing a cost function that quantifies the difference between properties computed by molecular dynamics simulations using these parameters and reference data obtained from experiments or high-level QM calculations. Active learning strategies are employed to intelligently select the most informative simulations to run, thereby efficiently exploring the parameter space and refining the force field. In the case of silica, the focus on matching the pair distribution function, a key structural property, highlights the importance of accurately capturing the microscopic organization of materials. This methodology demonstrates a systematic way to develop and improve force fields by iteratively comparing simulations with reference data and using machine learning to guide the optimization process.
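The iterative loop described above can be reduced to a few lines. In the sketch below a toy "simulation" and a fixed candidate list stand in for the real MD runs and the Gaussian-process proposal step, so this only illustrates the propose-simulate-score-select cycle, not the actual silica workflow.

```python
def fit_parameters(reference_peak, simulate, candidates):
    """Schematic parameter-refinement loop: for each candidate parameter
    set, run a (toy) simulation, compare a target property (here, the
    first peak of the pair distribution function) against the reference,
    and keep the lowest-cost parameters. In the real workflow a Gaussian
    process proposes new candidates each round; here we scan a list."""
    best, best_cost = None, float("inf")
    for params in candidates:
        cost = abs(simulate(params) - reference_peak)  # cost function
        if cost < best_cost:
            best, best_cost = params, cost
    return best, best_cost

# Toy "simulation": the peak position is proportional to the LJ sigma.
simulate = lambda sigma: 1.12 * sigma
best, cost = fit_parameters(reference_peak=1.62, simulate=simulate,
                            candidates=[1.2, 1.4, 1.45, 1.6])
print(best)  # → 1.45, the sigma whose simulated peak best matches 1.62
```

The value of active learning is precisely in replacing the fixed candidate list with a surrogate model that concentrates the expensive simulations where the cost function is most uncertain.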
4. Applications in Molecular Simulations: Expanding the Scope of Computational Investigations
Neural network force fields are finding increasing applications across a broad spectrum of molecular simulations, demonstrating their growing utility in tackling complex scientific problems.
For small molecules, MLFFs like Grappa have shown remarkable accuracy in predicting energies and forces (7). This enhanced accuracy enables more reliable simulations of chemical reactions, the calculation of precise molecular properties, and a deeper understanding of intermolecular interactions involving small molecules, which are fundamental in various fields, including drug design and materials chemistry.
The applicability of MLFFs extends to larger biomolecules such as peptides and RNA. Grappa’s strong performance on these systems suggests its potential for studying their conformational dynamics and intricate interactions with improved accuracy (7). This is crucial for gaining insights into the functional mechanisms of these essential biological macromolecules.
In the realm of proteins, CHARMM-NN aims to enhance the accuracy of MD simulations, which are indispensable for understanding a vast array of biological processes (1). Furthermore, hybrid NNP/MM methods are proving valuable for simulating protein-ligand complexes, allowing researchers to focus computational power on the critical binding site to achieve higher accuracy in predicting binding affinities and understanding the molecular basis of drug action (2).
The ability of MLFFs to handle complex environments is demonstrated by the successful application of SO3LR in simulating biomolecular systems in explicit solvent (4). These simulations, encompassing units of major biomolecule types and extending to nanosecond timescales, are essential for capturing the crucial role of the solvent environment in biological processes.
In materials science, active learning methods are being utilized to optimize force field parameters for materials like silica glass, with a focus on accurately reproducing structural properties (6). This allows for more reliable atomic-level investigations of material behavior, which is vital for the design and discovery of novel materials with tailored properties.
Notably, Grappa exhibits extensibility to radicals, chemical species that are often challenging to model accurately with traditional MM force fields (7). This capability broadens the scope of MLFFs to include reactive systems and opens new avenues for studying reaction mechanisms involving these important intermediates.
The diverse applications highlighted across these research efforts underscore the significant potential of neural network force fields to impact a wide range of scientific disciplines, from advancing our understanding of fundamental biological processes to accelerating the development of new drugs and materials. The ability to perform accurate and efficient simulations across different types of molecules and systems positions MLFFs as a transformative tool in computational science.
5. Challenges, Limitations, and the Quest for Improvement: Navigating the Path Towards Robust and Generalizable MLFFs
Despite the considerable progress in the development and application of neural network force fields, several challenges and limitations remain that necessitate ongoing research and innovation.
A fundamental requirement for training effective MLFFs is the availability of high-quality training data (3). These datasets typically consist of a large number of accurate quantum mechanical calculations that capture the relevant chemical space and interactions (4). Generating such data can be computationally expensive and may represent a bottleneck in the development of new MLFFs. The performance of an MLFF is intrinsically linked to the quality and representativeness of the data it is trained on, emphasizing the need for careful data selection and generation strategies.
While MLFFs generally offer a significant speedup compared to QM methods, their computational cost can still be higher than that of traditional MM force fields, especially for very large systems (2). Achieving a true balance between accuracy and efficiency remains a key area of focus. The complexity of the neural network architecture and the number of parameters can influence the computational cost of evaluating the force field, prompting research into more efficient network designs and inference methods.
Accurately modeling long-range interactions, such as electrostatics and van der Waals forces, presents another challenge for some neural network architectures (2). While neural networks excel at learning local interactions, capturing the nuances of forces acting between distant atoms can be more difficult. Hybrid approaches, like the SO3LR method, which explicitly incorporate terms for long-range interactions, represent a strategy to address this limitation (4).
Ensuring the transferability and generalizability of MLFFs is crucial for their widespread applicability (1). A truly useful force field should be capable of accurately predicting the behavior of systems not explicitly included in the training data. Overfitting to the training set can lead to poor performance on unseen systems, highlighting the need for robust training methodologies and validation strategies.
Improving data efficiency, reducing the amount of training data required to achieve a desired level of accuracy, is an important area of ongoing research (3). Strategies such as active learning, which intelligently selects the most informative data points for training, can help to minimize the data requirements and associated computational costs (6).
Developing robust methods for validation and assessing the reliability of MLFFs across diverse chemical environments is essential for building confidence in their predictions (3). Standardized benchmark datasets and rigorous testing protocols are needed to systematically evaluate the performance of different MLFFs and identify their strengths and weaknesses.
Many current MLFFs do not explicitly account for electronic effects, such as polarization and charge transfer, which can be significant in certain systems (9). Hybrid models that combine neural networks with polarizable force fields offer a potential avenue for incorporating these effects. Explicitly modeling electronic degrees of freedom within a purely neural network framework remains a challenging but important area for future development.
These challenges and limitations underscore the fact that the field of MLFFs is still in an active phase of development. While significant strides have been made, continued research and innovation are necessary to overcome these hurdles and realize the full potential of neural network force fields for molecular simulations.
6. Software Toolkits for Machine-Learned Potential Energy Surfaces: Democratizing Access to Advanced Simulation Capabilities
The development of user-friendly software toolkits is playing a crucial role in making the power of machine-learned potential energy surfaces (ML-PES) accessible to a broader community of researchers. These toolkits streamline the often complex process of building, training, and applying ML-PES models.
Asparagus is a Python package specifically designed for the autonomous and user-guided construction of ML-PES (21). It provides a comprehensive workflow that encompasses initial data sampling, interfaces to ab initio calculation programs, ML model training, as well as model evaluation and its application within other simulation codes such as ASE and CHARMM. By integrating these various steps into a coherent implementation, Asparagus aims to lower the barrier for researchers to utilize ML-PES in their work and improve the reproducibility of results. The modular framework of Asparagus is designed to facilitate the incorporation of new ML-related methods and models, ensuring that users have access to state-of-the-art techniques.
SchNetPack is another versatile toolbox for atomistic neural networks, catering to both the development and application of atomistic machine learning models (14). It includes fundamental building blocks for atomistic neural networks, manages their training, and provides straightforward access to common benchmark datasets. Notably, SchNetPack features a PyTorch implementation of molecular dynamics and supports equivariant neural networks, which are crucial for accurately modeling the symmetries inherent in molecular systems. This toolbox serves as a valuable platform for researchers looking to develop novel ML models or apply existing ones to a wide range of problems in computational chemistry and materials science.
The Neural Force Field (NFF) code provides an API built upon several popular neural network architectures, including SchNet, DimeNet, PaiNN, DANN, CHGNet, and MACE (31). It offers a unified interface for training and evaluating neural networks for force fields and can also be used as a property predictor that incorporates both 3D geometry and 2D graph information. NFF also supports the use of neural network ensembles for uncertainty quantification and adversarial sampling of geometries. By consolidating access to multiple state-of-the-art NN models, NFF simplifies the process of experimenting with different architectures and selecting the most suitable one for a given application.
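The ensemble-based uncertainty idea mentioned for NFF is easy to sketch: train several models, and treat their disagreement as an error estimate that can drive data selection. The snippet uses scalar toy "models" rather than NFF's actual API, so all names here are illustrative.

```python
def ensemble_uncertainty(models, x):
    """Disagreement (variance) of an ensemble's predictions at input x."""
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

def select_for_labeling(models, pool, k=2):
    """Query-by-committee sketch of uncertainty-driven sampling: pick the
    k pool geometries (here, scalar stand-ins) where the ensemble
    disagrees most; those are the points worth new QM calculations."""
    return sorted(pool, key=lambda x: ensemble_uncertainty(models, x),
                  reverse=True)[:k]

# Toy ensemble: members agree near x = 0 and diverge for large |x|.
models = [lambda x, a=a: a * x for a in (0.9, 1.0, 1.1)]
print(select_for_labeling(models, pool=[0.1, 0.5, 2.0, 3.0]))  # → [3.0, 2.0]
```

The same disagreement signal underlies adversarial sampling: instead of ranking a fixed pool, one searches geometry space directly for configurations that maximize the ensemble variance.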
Furthermore, OpenMM-ML facilitates the integration of pre-trained ANI (Accurate NeurAl networK engINe for Molecular Energies) potentials into the widely used OpenMM molecular dynamics simulation package (32). This allows researchers who are already familiar with OpenMM to easily leverage the accuracy of ANI neural network potentials in their MD simulations, broadening the accessibility of these advanced force fields to the wider molecular simulation community. The interoperability between different software packages is essential for promoting the adoption and application of new methodologies.
These software toolkits play a critical role in democratizing access to the advanced capabilities of machine-learned potential energy surfaces. By providing user-friendly interfaces, comprehensive workflows, and support for various ML models and simulation packages, they empower a broader range of researchers to harness the power of MLFFs in their scientific investigations.
7. Future Perspectives and Research Directions: Charting the Course for the Next Generation of Molecular Simulations
The field of neural network force fields is rapidly evolving, with numerous promising avenues for future research and development.
One key direction involves the development of more accurate and efficient network architectures (7). Researchers are continuously exploring novel neural network designs that can better capture the complex nature of atomic interactions with improved accuracy and reduced computational cost. Innovations in deep learning, such as the development of equivariant neural networks that inherently respect physical symmetries, are expected to play a significant role in this area.
Further progress is anticipated in improving the handling of long-range interactions and electronic effects (2). Future research will likely focus on developing more sophisticated methods to accurately model electrostatic and dispersion forces, as well as incorporating electronic polarization and charge transfer into MLFFs. This could involve the development of new network architectures or the integration of neural networks with more traditional approaches that explicitly treat these effects.
Enhancing the transferability and generalizability of MLFFs remains a crucial goal (34). Efforts will continue to focus on developing force fields that can be reliably applied to a wider range of chemical space and different thermodynamic conditions. This may involve training on more diverse datasets, exploring techniques like domain adaptation and meta-learning, or incorporating more fundamental physical principles into the design of the neural networks.
The development of more efficient training methods is also a significant area of focus (6). Researchers are exploring strategies such as active learning, transfer learning, and the use of physics-informed neural networks to reduce the amount of high-quality training data required to achieve a desired level of accuracy. This will make the development of MLFFs more feasible and accessible.
The integration of experimental data for refinement presents another exciting avenue for future research (38). Incorporating experimental measurements, such as thermodynamic properties or spectroscopic data, into the training or validation process could further improve the accuracy and reliability of MLFFs by providing real-world constraints on the models.
Extending the capabilities of MLFFs to accurately model reactive systems involving bond breaking and formation is a significant future direction (19). This would open up new possibilities for studying chemical transformations with high accuracy, a capability that is challenging for many current force fields.
Finally, as the accuracy and efficiency of MLFFs continue to improve, they are expected to be applied to increasingly complex systems, such as large biomolecular assemblies, materials interfaces, and condensed-phase systems (2). This will require further advancements in both the force fields themselves and the simulation algorithms used to apply them.
The ongoing research and innovation in the field of neural network force fields suggest a future where these methods play an increasingly central role in molecular simulations, revolutionizing our ability to understand and predict the behavior of molecules and materials at the atomic level.
8. Conclusion: The Transformative Potential of Neural Network Force Fields in Molecular Simulations
Neural network force fields represent a significant leap forward in the field of molecular simulations, offering a compelling synergy between the accuracy of quantum mechanical methods and the efficiency of classical force fields (3). The diverse array of methodologies and architectures being actively developed, including innovative hybrid approaches and sophisticated graph neural networks, underscores the intense research activity and innovation characterizing this domain. The successful application of MLFFs to a wide spectrum of systems, ranging from small molecules and peptides to complex biomolecular assemblies and materials, highlights their versatility and substantial potential.
While challenges and limitations such as the need for high-quality training data, computational cost, and the accurate handling of long-range interactions persist, ongoing research is diligently addressing these issues, paving the way for the development of more robust and generalizable MLFFs in the future. The creation of user-friendly software toolkits like Asparagus and SchNetPack is playing a vital role in democratizing access to these advanced methods, thereby accelerating their adoption and application across the broader scientific community.
The continued progress in this dynamic field promises to fundamentally transform our ability to simulate and understand the behavior of molecules and materials at the atomic level. This transformative potential holds the key to unlocking significant breakthroughs in various scientific disciplines, including the acceleration of drug discovery, the design of novel materials with tailored properties, and a deeper understanding of fundamental chemical and physical processes. The integration of machine learning into molecular simulations is not merely an incremental improvement but rather a fundamental shift that is reshaping how we approach computational modeling in the molecular sciences, ushering in a new era of discovery and innovation.
Publications
- Toward a general neural network force field for protein simulations: Refining the intramolecular interaction in protein - PMC, accessed on March 21, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10481389/
- NNP/MM: Accelerating molecular dynamics simulations with machine learning potentials and molecular mechanics - PMC, accessed on March 21, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10577237/
- Machine Learning Force Fields: Transforming Molecular Dynamics Simulations - Quantistry, accessed on March 21, 2025, https://www.quantistry.com/blog/machine-learning-force-fields-molecular-dynamics-simulations
- Molecular Simulations with a Pretrained Neural Network and Universal Pairwise Force Fields - ChemRxiv, accessed on March 21, 2025, https://chemrxiv.org/engage/api-gateway/chemrxiv/assets/orp/resource/item/6704263051558a15ef6478b6/original/molecular-simulations-with-a-pretrained-neural-network-and-universal-pairwise-force-fields.pdf
- (article inaccessible; no title or access date available), https://www.sciencedirect.com/science/article/pii/S0010465524003692
- Interatomic forcefield parameterization by active learning - YouTube, accessed on March 21, 2025, https://www.youtube.com/watch?v=jsR_Jh1Ue58
- Grappa - A Machine Learned Molecular Mechanics Force Field - arXiv, accessed on March 21, 2025, https://arxiv.org/html/2404.00050v1
- An overview about neural networks potentials in molecular dynamics simulation, accessed on March 21, 2025, https://www.researchgate.net/publication/381106253_An_overview_about_neural_networks_potentials_in_molecular_dynamics_simulation
- Combining Force Fields and Neural Networks for an Accurate Representation of Chemically Diverse Molecular Interactions - ACS Publications, accessed on March 21, 2025, https://pubs.acs.org/doi/10.1021/jacs.3c07628
- Combining Force Fields and Neural Networks for an Accurate Representation of Chemically Diverse Molecular Interactions - PMC, accessed on March 21, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10623557/
- Grappa – a machine learned molecular mechanics force field - RSC Publishing, accessed on March 21, 2025, https://pubs.rsc.org/en/content/articlehtml/2025/sc/d4sc05465b
- Kristof T. Schütt - Semantic Scholar, accessed on March 21, 2025, https://www.semanticscholar.org/author/Kristof-T.-Sch%C3%BCtt/33075217
- Kristof T. Schütt - Technical University of Berlin, 54 Publications, 3266 Citations - SciSpace, accessed on March 21, 2025, https://scispace.com/authors/kristof-t-schutt-9k4090wah2
- Kristof T. Schütt's research works - ResearchGate, accessed on March 21, 2025, https://www.researchgate.net/scientific-contributions/Kristof-T-Schuett-2196878769
- Kristof T. Schütt - Technische Universität Berlin (TUB), Department of Software Engineering and Theoretical Computer Science - ResearchGate, accessed on March 21, 2025, https://www.researchgate.net/profile/Kristof-Schuett
- Kristof T. Schütt - Papers With Code, accessed on March 21, 2025, https://paperswithcode.com/author/kristof-t-schutt
- Michael Gastegger - Papers With Code, accessed on March 21, 2025, https://paperswithcode.com/author/michael-gastegger
- Machine Learning Force Fields - PMC - PubMed Central, accessed on March 21, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC8391964/
- Crash testing machine learning force fields for molecules, materials, and interfaces: molecular dynamics in the TEA challenge 2023 - Chemical Science (RSC Publishing) DOI:10.1039/D4SC06530A, accessed on March 21, 2025, https://pubs.rsc.org/en/content/articlehtml/2025/sc/d4sc06530a
- About machine learning potential : r/comp_chem - Reddit, accessed on March 21, 2025, https://www.reddit.com/r/comp_chem/comments/1d27v1g/about_machine_learning_potential/
- Asparagus Documentation — Asparagus Bundle 0.4 documentation, accessed on March 21, 2025, https://asparagus-bundle.readthedocs.io/
- Asparagus: A Toolkit for Autonomous, User-Guided Construction of Machine-Learned Potential Energy Surfaces - arXiv, accessed on March 21, 2025, https://arxiv.org/html/2407.15175v1
- Asparagus: A toolkit for autonomous, user-guided construction of machine-learned potential energy surfaces - Mendeley Data, accessed on March 21, 2025, https://data.mendeley.com/datasets/9w9xw7mp2h
- Markus Meuwly - dblp, accessed on March 21, 2025, https://dblp.org/pid/38/5007
- Asparagus: A toolkit for autonomous, user-guided construction of machine-learned potential energy surfaces - OUCI, accessed on March 21, 2025, https://ouci.dntb.gov.ua/en/works/4YqR2nq4/
- MLatom 3: A Platform for Machine Learning-Enhanced Computational Chemistry Simulations and Workflows [PDF] - Semantic Scholar, accessed on March 21, 2025, https://www.semanticscholar.org/paper/93060354cc8809dd87c6afe0a404c86f3e500554
- Neural Network Potential Energy Surfaces for Small Molecules and Reactions - Chemical Reviews - ACS Publications, accessed on March 21, 2025, https://pubs.acs.org/doi/abs/10.1021/acs.chemrev.0c00665
- MLatom 3: A Platform for Machine Learning-Enhanced Computational Chemistry Simulations and Workflows - ACS Publications, accessed on March 21, 2025, https://pubs.acs.org/doi/abs/10.1021/acs.jctc.3c01203
- keyword:”Physical Chemistry” - Science Explorer Search, accessed on March 21, 2025, https://www.scixplorer.org/search?q=keyword%3A%22Physical+Chemistry%22&sort=date+desc&p=1
- A Toolkit for Autonomous, User-Guided Construction of Machine-Learned Potential Energy Surfaces (PDF) - ResearchGate, accessed on March 21, 2025, https://www.researchgate.net/publication/386055529_A_Toolkit_for_Autonomous_User-Guided_Construction_of_Machine-Learned_Potential_Energy_Surfaces
- learningmatter-mit/NeuralForceField: Neural Network Force Field based on PyTorch - GitHub, accessed on March 21, 2025, https://github.com/learningmatter-mit/NeuralForceField
- An Introduction to Neural Network Potentials - Rowan, accessed on March 21, 2025, https://www.rowansci.com/publications/introduction-to-nnps
- Machine Learning of Coarse-Grained Molecular Dynamics Force Fields - PMC, accessed on March 21, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC6535777/
- Generalizability of Graph Neural Network Force Fields for Predicting Solid-State Properties, accessed on March 21, 2025, https://arxiv.org/html/2409.09931v1
- Data-driven parametrization of molecular mechanics force fields for expansive chemical space coverage - RSC Publishing, accessed on March 21, 2025, https://pubs.rsc.org/en/content/articlehtml/2025/sc/d4sc06640e
- Data-Driven Parametrization of Molecular Mechanics Force Fields for Expansive Chemical Space Coverage - arXiv, accessed on March 21, 2025, https://arxiv.org/html/2408.12817v1
- Adrian Roitberg - Research - University of Florida, accessed on March 21, 2025, https://scholars.ufl.edu/roitberg/grants
- Fine-tuning molecular mechanics force fields to experimental free energy measurements - bioRxiv, accessed on March 21, 2025, https://www.biorxiv.org/content/10.1101/2025.01.06.631610v1.full-text
- Challenges, limitations, and impact of machine learning (ML) in… - ResearchGate, accessed on March 21, 2025, https://www.researchgate.net/figure/Challenges-limitations-and-impact-of-machine-learning-ML-in-molecular-dynamics-MD_fig12_379410038