Machine Learning Accelerates Excited-State Dynamics

Machine learning paves the way for massive simulations of nonadiabatic excited-state molecular dynamics.

Excited-state processes are of utmost importance for many vital processes in both nature and technology, including photosynthesis and photovoltaic applications. Theoretical simulations are therefore indispensable for better understanding and controlling these processes. Performing such simulations is, however, very costly, and additional complications arise from the need to take nonadiabatic transitions into account.

Luckily, we live in the age of surging machine learning (ML). As I have shown previously, months of quantum mechanical calculations can now be replaced by seconds of ML calculations when computing rovibrational spectra. Thus, in collaboration with Mario Barbatti, we have investigated how ML can drastically cut the cost of nonadiabatic excited-state simulations, and the results of this study have been published in J. Phys. Chem. Lett.[1]

First of all, we demonstrated that it is possible in principle to construct a complete ML model that knows everything about the system necessary to perform nonadiabatic excited-state dynamics with decoherence-corrected fewest-switches surface hopping (DC-FSSH). The demonstration was done using an adiabatic spin-boson Hamiltonian (A-SBH) model. As you can see in the figure above, the trajectory run with the complete ML model is identical to the trajectory run with the A-SBH, including the reproduction of all hopping events.
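To give a flavor of the surface-hopping step that such an ML model must feed, here is a minimal sketch of a fewest-switches hopping probability in the spirit of Tully's criterion. This is an illustrative textbook-style formula, not the paper's implementation; the function name, the sign convention for the nonadiabatic coupling, and the numerical values are my own assumptions.

```python
import numpy as np

def fssh_hop_probability(c, k, j, nacv_dot_v, dt):
    """Probability of hopping from current state k to state j over one
    time step dt, in the spirit of Tully's fewest-switches criterion.

    c          -- complex electronic amplitudes (one per state)
    nacv_dot_v -- scalar product v . d_kj of nuclear velocity and
                  nonadiabatic coupling vector (sign convention assumed)
    dt         -- time step
    """
    a_kk = np.abs(c[k]) ** 2                      # current-state population
    b_jk = -2.0 * np.real(np.conj(c[k]) * c[j]) * nacv_dot_v
    return min(1.0, max(0.0, b_jk * dt / a_kk))   # clamp to [0, 1]

# Illustrative use: hop if a uniform random number falls below g
c = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)
g = fssh_hop_probability(c, 0, 1, nacv_dot_v=-0.5, dt=0.1)
```

In an actual DC-FSSH run this decision is made at every nuclear time step, which is exactly why the energies, gradients, and couplings entering it must be cheap to evaluate.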

Next, we moved on to address realistic, high-dimensional systems. For this, A-SBH models proved extremely useful because of their flexibility, generality, and low computational cost. This allowed us to run thousands of trajectories and to perform a systematic study unbiased toward the selection of a quantum mechanical method or a target molecule. Additionally, A-SBH models feature strong coupling between dimensions, which makes them even more challenging than typical atomistic systems.
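For readers unfamiliar with spin-boson-type models, the sketch below shows how adiabatic surfaces arise from a simple two-state diabatic Hamiltonian with a harmonic bath. All parameters here are illustrative placeholders, not those used in the paper, and the function is my own naming.

```python
import numpy as np

def adiabatic_sbh_energies(x, couplings, delta, eps=1.0, omega=1.0):
    """Adiabatic surfaces of a simple two-state spin-boson-type model.

    Diabatic picture (illustrative parameters):
        H = [[ eps + sum_i c_i x_i,   delta               ],
             [ delta,                -eps - sum_i c_i x_i ]]
    plus a shared harmonic bath 0.5 * omega**2 * sum_i x_i**2.
    Diagonalizing the 2x2 electronic part gives the lower and upper
    adiabatic surfaces.
    """
    x = np.asarray(x, dtype=float)
    bath = 0.5 * omega**2 * np.sum(x**2)
    tuning = eps + np.dot(couplings, x)
    gap = np.hypot(tuning, delta)  # sqrt(tuning^2 + delta^2)
    return bath - gap, bath + gap
```

Because every bath coordinate enters the tuning term, all dimensions are coupled through the electronic gap, which is the source of the "strong coupling between dimensions" mentioned above.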

In brief, we propose to sample relatively few points from the high-dimensional space using a procedure similar to our structure-based sampling. Based on these points, we generate approximate adiabatic ML potentials. With proper treatment of nonadiabatic transitions, these potentials can be used in DC-FSSH dynamics to reproduce observables such as excited-state lifetimes. The number of training points needed for ML is much lower than the number of costly quantum mechanical calculations expected for a typical simulation. Thus, thousands of trajectories can be run with ML, which widens the scope of problems that can be solved theoretically and improves the precision of nonadiabatic excited-state dynamics simulations.
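The idea of fitting an approximate potential to a few sampled points can be sketched with a generic kernel ridge regression, written from scratch here for self-containment. This is a bare-bones illustration, not the MLatom implementation or the paper's actual workflow; kernel width, regularization, and the toy potential are assumed values.

```python
import numpy as np

def train_krr(X, y, sigma=0.5, lam=1e-8):
    """Fit kernel ridge regression with a Gaussian kernel:
    solve (K + lam * I) alpha = y for the coefficients alpha."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict_krr(Xtrain, alpha, Xnew, sigma=0.5):
    """Evaluate the fitted potential at new geometries Xnew."""
    d2 = np.sum((Xnew[:, None, :] - Xtrain[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ alpha

# Toy example: learn a 1-D harmonic potential from 15 sampled points
X = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)
y = 0.5 * X[:, 0] ** 2
alpha = train_krr(X, y)
```

Once trained, evaluating the ML potential is a single matrix-vector product per geometry, which is what makes running thousands of trajectories affordable.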

To perform the dynamics, I have interfaced Newton-X, a program widely used for excited-state molecular dynamics, with my own program MLatom for atomistic simulations with ML. We are working with Mario Barbatti, a major developer of Newton-X, on preparing our implementations for release so that they will be freely available to the community.

1. Pavlo O. Dral, Mario Barbatti, Walter Thiel, Nonadiabatic Excited-State Dynamics with Machine Learning. J. Phys. Chem. Lett. 2018, 9, 5660–5663. DOI: 10.1021/acs.jpclett.8b02469.

2 Comments on “Machine Learning Accelerates Excited-State Dynamics”

  1. Hi Pavlo, nice paper. But the question I am wondering about is: Do you think it is ever possible to make conical intersections actually conical this way? In other words, will the PESs actually have a discontinuous derivative at the intersection and will the two PESs only touch at one point?

    -Felix

    • Hi Felix, thank you for the question. ML can only give predictions based on the training points – you feed it finite numbers, including gradients and energies, and ML predicts finite numbers back. For realistic systems you will always have only approximate ML potentials, and in practice the two PESs can touch at only one point essentially only if you include the conical intersection in the training set – and even then you have no 100% guarantee. In our case, the complete ML model should be numerically almost exactly equal to the A-SBH, so the two PESs should touch at one point. For the 33-D model, we switched on A-SBH calculations well in advance of reaching the CI.
