Machine Learning techniques FOR Quantum Gate Engineering
Acronym: ML Q-FORGE
Actors: Department of Physics (Francesco Pederiva), Lawrence Livermore National Lab (Jonathan Dubois and Kyle Wendt), and Simone Taioli (ECT*).
Proposal: to fund a PhD student (€ 66.000)
Period: Started November 2019.
Control-centric quantum computing is based on the idea that arbitrary “gates” acting on a qubit can be realized by applying an external, time-dependent control signal (which can be regarded as the “low-level software”) to a predetermined quantum system (the “hardware”), in such a way that the resulting propagation reproduces the effect of a Hamiltonian of choice. Determining the control pulse, however, requires an optimization on classical machines whose cost grows exponentially with the size of the system; for applications that involve simulating time-dependent processes (e.g. nuclear reactions, chemical reactions, ab-initio lattice dynamics in solids), this might seem a serious application killer. In this project, the research group intends to study general schemes to accelerate the determination of a control pulse in a control-centric quantum computing scheme, with the aim of avoiding this exponential increase in classical computing cost as the size of the system grows.

On a more concrete level, the researchers want to find a set of parameters α that encodes a unitary matrix U(α) approximating the time-evolution operator U of a targeted Hamiltonian H. In principle, the unitary matrix U has more degrees of freedom than α, and the objective function that quantifies how well U(α) reproduces U is relatively expensive to compute. Machine Learning (ML) techniques are well suited to accomplish this goal.

The first goal will be to apply supervised learning as a surrogate to speed up the process of finding the optimal α for a given U. The idea is to build a smooth map from specific features of H (e.g. the positions of the particles or a coupling constant) to regions of the control parameters α. This approach will certainly be applicable to smaller problems, such as the dynamics of two to four nucleons (e.g. the scattering of two neutrons, where there are only two features to map: the length and azimuthal angle of the interparticle separation). As the problem size grows, the number of features to map grows rapidly and supervised learning becomes infeasible. In that case, unsupervised learning tools, such as autoencoders and generative models, can be used to map the exponentially large feature space onto a smaller, more manageable synthetic feature space.
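To make the setting concrete, the following is a minimal sketch, not the project's actual code, of how a control pulse can be optimized so that the driven propagator U(α) approximates a target gate. The drift and control Hamiltonians, the piecewise-constant pulse parametrization, and the target unitary are all illustrative assumptions; the objective is the standard gate infidelity 1 − |Tr(U_target† U(α))|/d mentioned in the text as the relatively expensive quantity to evaluate.

```python
# Illustrative sketch: optimize piecewise-constant control amplitudes alpha so
# that the driven propagator U(alpha) approximates a target unitary.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Pauli matrices for a single qubit (assumed "hardware").
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H_drift = 0.5 * sz                       # fixed system Hamiltonian (assumed)
H_ctrl = 0.5 * sx                        # operator coupled to the control signal (assumed)
U_target = expm(-1j * np.pi / 4 * sx)    # example target gate

T, n_steps = 1.0, 20                     # total evolution time and pulse discretization
dt = T / n_steps

def propagator(alpha):
    """Time-ordered product of short-time propagators for a
    piecewise-constant control pulse alpha[k]."""
    U = np.eye(2, dtype=complex)
    for a in alpha:
        U = expm(-1j * dt * (H_drift + a * H_ctrl)) @ U
    return U

def infidelity(alpha):
    """1 - |Tr(U_target^dagger U(alpha))| / d: the (relatively expensive)
    objective measuring how well U(alpha) reproduces the target."""
    d = U_target.shape[0]
    overlap = np.trace(U_target.conj().T @ propagator(alpha))
    return 1.0 - abs(overlap) / d

alpha0 = np.zeros(n_steps)               # initial guess for the pulse
result = minimize(infidelity, alpha0, method="BFGS")
print("final infidelity:", result.fun)
```

Each evaluation of the objective requires propagating the full system, which is what makes repeating this optimization for every new Hamiltonian costly and motivates the surrogate approach.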
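The supervised-surrogate idea can be sketched as follows, under assumed shapes and with synthetic stand-in data: features of the target Hamiltonian (here two numbers, e.g. the interparticle distance and angle of the two-neutron example) are regressed onto control parameters α obtained from previous full optimizations, and the regressor's prediction seeds a much shorter optimization for a new Hamiltonian. The dataset and network size are hypothetical.

```python
# Illustrative sketch of a supervised surrogate mapping Hamiltonian features to
# previously optimized control parameters alpha.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: each row of X is a feature vector of the target
# Hamiltonian, each row of Y is the alpha found by a full (expensive) optimization.
X = rng.uniform(size=(200, 2))           # (distance, angle) pairs, stand-in data
Y = rng.normal(size=(200, 20))           # corresponding optimal pulses, stand-in data

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(X, Y)

# For a new Hamiltonian, the surrogate supplies a warm start alpha_guess,
# which would then be refined with a few steps of the expensive optimizer.
new_features = np.array([[0.3, 0.7]])
alpha_guess = surrogate.predict(new_features)[0]
```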
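For larger systems, where the raw feature space becomes too large for direct supervised learning, the text proposes autoencoders and generative models. Below is a minimal autoencoder sketch with assumed dimensions and architecture: a high-dimensional vector of Hamiltonian features is compressed into a few synthetic features, on which the surrogate above could then be trained.

```python
# Illustrative autoencoder compressing a large Hamiltonian feature vector into
# a small set of synthetic features (dimensions and architecture assumed).
import torch
import torch.nn as nn

n_features, n_latent = 256, 4            # illustrative raw / synthetic dimensions

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: in practice each row would collect the features of one
# many-body Hamiltonian (e.g. particle positions and coupling constants).
data = torch.randn(1024, n_features)

for epoch in range(50):                  # reconstruction training loop
    optimizer.zero_grad()
    loss = loss_fn(model(data), data)
    loss.backward()
    optimizer.step()

# The encoder output is the compressed, synthetic feature vector.
synthetic_features = model.encoder(data)
```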