2021 Poster Abstracts

1. Estimating distinguishability measures on quantum computers

Rochisha Agarwal, Louisiana State University

Abstract. The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal case. The most prominent distinguishability measures are those based on the fidelity and trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on trace distance and fidelity, and we evaluate their performance using simulators of quantum computers. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as "quantum combs"). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate these algorithms by using a variational approach with parameterized quantum circuits and find that they converge well for the examples that we consider.
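As a classical point of reference for the quantities being estimated, both measures can be computed directly from density matrices. A minimal NumPy sketch (the states |0> and |+> are chosen purely for illustration):

```python
import numpy as np

def psd_sqrt(m):
    # Matrix square root of a positive semidefinite matrix via eigendecomposition.
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.conj().T

def trace_distance(rho, sigma):
    # T(rho, sigma) = (1/2) ||rho - sigma||_1
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def fidelity(rho, sigma):
    # Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2)

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(ket0, ket0)
sigma = np.outer(ketp, ketp)

print(trace_distance(rho, sigma))  # for pure states: sqrt(1 - F) ~ 0.7071
print(fidelity(rho, sigma))        # |<0|+>|^2 = 0.5
```

For pure states the two measures satisfy T = sqrt(1 - F), which the printed values reflect.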

2. Multimode Gaussian State Tomography With A Single Photon Counter

Arik Avagyan, National Institute of Standards and Technology, Boulder

Abstract. The continuing improvement in the qualities of photon-number-resolving detectors opens new possibilities for performing state tomography on quantum states of light. In this work we consider the question of what can be learned about an arbitrary multimode Gaussian state by a single photon-number-resolving detector that does not distinguish between modes. We find an answer to this question in the ideal case when the detector does not have a limit on the photon number resolution. In the asymptotic data limit we show that such a setup allows one to learn the eigenspectrum of the covariance matrix of the state and the absolute displacement along each eigenspace, and nothing else. We investigate the operational meaning of these parameters for pure and mixed states. We conjecture that these parameters can be learned from a finite set of photon number expectations, with the size of the set depending on the number of modes of the Gaussian state, which would make this measurement setup practical.

3. An Actively Controlled Dual Species Rb Atomic Magnetometer for Low Frequency Communication

John Bainbridge, University of New Mexico CQuIC

Abstract. We present a radiofrequency (RF) atomic magnetometer based on natural abundance rubidium vapor for communications through lossy media. By utilizing both 85Rb and 87Rb, we build upon the variometer concept first presented by Alexandrov et al [1] and the atomic RF magnetometers first built at Princeton [2]. We have constructed a variometer using 87Rb to obtain the full external field vector, which allows for active stabilization of the Larmor resonance of 85Rb at the desired frequency via an FPGA-based feedback system. We have also designed a 3D-printed miniaturized housing for our physics package. Here we report on our progress toward a highly sensitive, fieldable magnetometer for low frequency communications in lossy media that can operate outside a magnetic shield. Funding Statement Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. References [1] E.B. Alexandrov et al, Meas. Sci. Technol. 15, 918-922 (2004). [2] I.M. Savukov, S.J. Seltzer, and M.V. Romalis, PRL 95, 063004 (2005)

4. Measuring out-of-time-ordered correlation functions without reversing time evolution

Philip Blocher, University of New Mexico CQuIC

Abstract. In recent years out-of-time-ordered correlation functions (OTOCs) have played a crucial role in characterizing the dynamics of quantum information in quantum many-body systems due to their ability to quantify quantum information scrambling. In particular, the relationship between scrambling, entanglement, thermalization, and quantum chaos has been elucidated using OTOCs [2]. However, due to their out-of-time-ordered nature, OTOCs are difficult to measure experimentally and at first glance require the reversal of time evolution in experiments. In this poster we present a novel OTOC measurement protocol that is easy to implement in a range of experimental and theoretical settings. The measurement protocol circumvents the need for the reversal of time evolution by relating the OTOC to the expectation value of a single-time operator in a simple forward time-evolved state [1]. Thus, only a single instance of the system and no auxiliary degrees of freedom are needed. We demonstrate how the protocol accounts for both pure and mixed initial states and extend it to systems that interact with environmental degrees of freedom. Finally, we highlight the application of our protocol with examples from the characterization of scrambling in a driven spin that exhibits quantum chaos. [1] P.D. Blocher, S. Asaad, V. Mourik, M.I. Johnson, A. Morello, and K. Mølmer, arXiv:2003.03980 (2020). [2] R.J. Lewis-Swan, A. Safavi-Naini, J.J. Bollinger, and A.M. Rey, Nat. Commun. 10, 1581 (2019).

5. Behavior of Analog Quantum Algorithms

Lucas Brady, National Institute of Standards and Technology, Maryland

Abstract. Analog quantum algorithms are formulated in terms of Hamiltonians rather than unitary gates and include quantum adiabatic computing, quantum annealing, and the quantum approximate optimization algorithm (QAOA). These algorithms are promising candidates for near-term quantum applications, but they often require fine tuning via the annealing schedule or variational parameters. In this work, we explore connections between these analog algorithms, as well as limits in which they become approximations of the optimal procedure. Notably, we explore how the optimal procedure approaches a smooth adiabatic procedure but with a superposed oscillatory pattern that can be explained in terms of the interactions between the ground state and first excited state that effect the coherent error cancellation of diabatic transitions. Furthermore, we provide numeric and analytic evidence that QAOA emulates this optimal procedure with the length of each QAOA layer equal to the period of the oscillatory pattern. Additionally, the ratios of the QAOA bangs are determined by the smooth, non-oscillatory part of the optimal procedure. We provide arguments for these phenomena in terms of the product formula expansion of the optimal procedure. With these arguments, we conclude that different analog algorithms can emulate the optimal protocol under different limits and approximations. Finally, we present a new algorithm for better approximating the optimal protocol using these analytic and numeric insights.

6. A pulsed gradiometer in Earth's field with direct optical readout

Kaleb Campbell, University of New Mexico CQuIC

Abstract. We describe an atomic gradiometer based on the magnetically sensitive hyperfine coherence in two vapor cells of warm 87Rb atoms. The device provides a direct readout of the gradient field, unlike traditional gradiometers which subtract the outputs of two spatially separated magnetometers. A pulsed microwave field resonant with the hyperfine ground state splitting prepares an atomic coherence and generates sidebands offset from a weak (carrier) beam incident on two vapor cells. The sidebands interfere and an optical beat note is produced, with the frequency of the beat directly proportional to the magnetic field gradient between the two cells. We also describe a theoretical framework and numerical model we developed to understand the sideband generation process and to inform experiments. For a practical gradiometer, it is important to be able to measure the gradient regardless of the direction of the ambient magnetic field, either perpendicular or parallel to the laser beam propagation axis. Operation of the gradiometer in multiple field orientations is discussed as well as single beam operation, where one beam acts as both a pump and carrier. Single beam operation is simpler and more compact and is beneficial for applications such as Magnetoencephalography (MEG), where multiple sensor channels are tightly positioned around the human skull.

7. Barren Plateaus in Quantum Neural Networks

Marco Cerezo, Los Alamos National Laboratory

Abstract. Quantum Neural Networks (QNNs) and Variational Quantum Algorithms (VQAs) have the potential to enable the first practical applications of quantum machine learning on near-term noisy devices. At their core, both QNNs and VQAs train the parameters of a neural network (or a parametrized quantum circuit) to minimize a cost function that encodes the information of a problem. While many different architectures have been proposed, most of them are heuristic methods with unproven scaling, and hence cannot guarantee that the optimization will be successful. In fact, one of the few rigorous results analyzing the trainability of the parameters is that the cost landscape can exhibit the so-called barren plateau phenomenon, where cost-function gradients vanish exponentially with the system size. In this talk we first discuss the importance of performing rigorous scaling analysis of the trainability of QNNs and VQAs. We then review recent results where we analyze the trainability of two types of QNNs: the first is a parametrized quantum circuit commonly known as a layered hardware-efficient ansatz, and the second is a quantum convolutional neural network. For the hardware-efficient ansatz, we show that the existence of barren plateaus can be linked to the locality of the cost function. Then, we analyze the quantum convolutional neural network and show that this specific architecture does not exhibit barren plateaus, and hence can be generically trainable.

8. Free fermions behind the disguise

Adrian Chapman, University of Oxford

Abstract. An invaluable method for probing the physics of a quantum many-body spin system is a mapping to noninteracting effective fermions. We find such mappings using the frustration graph G of a Hamiltonian H, i.e., the network of anticommutation relations between the Pauli terms in H in a given basis. Specifically, when G is (even-hole, claw)-free, we construct an explicit free-fermion solution for H using only this structure of G, even when no Jordan-Wigner transformation exists. The solution method is generic in that it applies for any values of the couplings. This mapping generalizes both the classic Lieb-Schultz-Mattis solution of the XY model and an exact solution of a spin chain recently given by Fendley, dubbed "free fermions in disguise." Like Fendley's original example, the free-fermion operators that solve the model are generally highly nonlinear and nonlocal, but can nonetheless be found explicitly using a transfer operator defined in terms of the independent sets of G. The associated single-particle energies are calculated using the roots of the independence polynomial of G, which are guaranteed to be real by a result of Chudnovsky and Seymour. Furthermore, recognizing (even-hole, claw)-free graphs can be done in polynomial time, so recognizing when a spin model is solvable in this way is efficient. We give several example families of solvable models for which no Jordan-Wigner solution exists.
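The frustration graph itself is straightforward to construct classically: two Pauli terms anticommute exactly when they act differently and nontrivially on an odd number of sites. A small Python sketch, assuming Pauli terms written as strings over {I, X, Y, Z} (the example terms are illustrative XY-chain couplings, not taken from the paper):

```python
from itertools import combinations

def anticommute(p, q):
    # Two Pauli strings anticommute iff they differ on an odd number of
    # sites where both act nontrivially (i.e., neither factor is I).
    diff = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return diff % 2 == 1

def frustration_graph(terms):
    # Edges of the frustration graph G: pairs of anticommuting Pauli terms.
    return [(i, j) for i, j in combinations(range(len(terms)), 2)
            if anticommute(terms[i], terms[j])]

# Nearest-neighbor XX and YY couplings on 3 sites.
terms = ["XXI", "YYI", "IXX", "IYY"]
print(frustration_graph(terms))  # [(0, 3), (1, 2)]
```

Solvability conditions such as (even-hole, claw)-freeness would then be checked on the edge list produced here.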

9. Quantum algorithm for time-dependent Hamiltonian simulation by permutation expansion

Yi-Hsiang Chen, University of Southern California

Abstract. We present a quantum algorithm for the dynamical simulation of time-dependent Hamiltonians. Our method involves expanding the interaction-picture Hamiltonian as a sum of generalized permutations, which leads to an integral-free Dyson series of the time-evolution operator. Under this representation, we perform a quantum simulation for the time-evolution operator by means of the linear combination of unitaries technique. We optimize the time steps of the evolution based on the Hamiltonian's dynamical characteristics, leading to a gate count that scales with an L1-norm-like scaling with respect only to the norm of the interaction Hamiltonian, rather than that of the total Hamiltonian. We demonstrate that the cost of the algorithm is independent of the Hamiltonian's frequencies, implying its advantage for systems with highly oscillating components, and for time-decaying systems the cost does not scale with the total evolution time asymptotically. In addition, our algorithm retains the near optimal log(1/ε)/ log log(1/ε) scaling with simulation error ε.

10. Holographic dynamics simulations with a trapped ion quantum computer

Eli Chertkov, Honeywell

Abstract. Quantum computers promise to efficiently simulate quantum dynamics, a classically intractable task central to fields ranging from chemistry to high-energy physics. Quantum processors capable of high-fidelity mid-circuit measurement and qubit reuse enable a holographic technique that uses quantum tensor network states (qTNS), which efficiently compress quantum data, to simulate the evolution of infinitely-long, entangled initial states using a small number of qubits. In this poster, we discuss our recent benchmark of a holographic qTNS technique in a trapped ion quantum processor using 11 qubits to simulate an entangled state evolving under self-dual kicked Ising chain dynamics. We observe hallmarks of quantum chaos and light-cone propagation of correlations, and find excellent quantitative agreement with theoretical predictions for the infinite-size limit with minimal post-processing or error mitigation. These results show that qTNS methods, paired with state-of-the-art quantum processor capabilities, offer a viable route to practical quantum computational advantage on problems of direct interest to science and technology in the near term.

11. Quantifying expressibility of variational quantum circuits

Yash Chitgopekar, Oak Ridge National Laboratory

Abstract. Deep learning methods have become constrained by available computational power, and new platforms are required. As quantum hardware has progressed into the Noisy Intermediate-Scale Quantum (NISQ) era, variational quantum circuits have been proposed as models for feed-forward networks. For them to be successful in supervised learning tasks, they must be capable of representing a variety of quantum states. Shallow circuits are unsuitable for this, yet similar to difficulties with deep neural networks, as the length of a quantum circuit increases, gradient-based training loses its efficacy. To combat this phenomenon, shorter circuit ansatze with greater expressibility must be identified. The expressibility of quantum circuits for quantum state preparation tasks has previously been quantified in terms of coverage of the Bloch sphere. Adapting this analysis for quantum circuits that are trained to prepare classical distributions, we propose a novel method based on Löwner-John ellipsoids that directly quantifies the circuits' ability to explore the probability simplex. We execute noiseless simulations of one, two, and three-qubit circuits of varying depth over randomly sampled parameter sets to compare different circuit topologies. In our studies on circuits with up to three quits and thirty parameters, we observe that increases in expressibility arrive in abrupt, step-like increments. Using this characteristic, we suggest guidelines on circuit design in quantum circuit learning.

12. Unifying state-of-the-art quantum error mitigation techniques

Piotr Czarnik, Los Alamos National Laboratory

Abstract. Achieving near-term quantum advantage will require effective methods for mitigating hardware noise. Many state-of-the-art error mitigation methods are data-driven, employing classical data obtained from runs of different quantum circuits. For example, zero-noise extrapolation (ZNE) uses variable-noise data, Clifford-data regression (CDR) uses data from near-Clifford circuits, and virtual distillation (VD) utilizes data produced from different numbers of state preparations. First, we propose a novel, scalable error mitigation method that conceptually unifies ZNE and CDR. Our approach, called variable-noise Clifford data regression (vnCDR), generates training data first via near-Clifford circuits (which are classically simulable) and second by varying the noise levels in these circuits. We employ a noise model obtained from IBM's Ourense quantum computer to benchmark our method and show that it significantly outperforms these individual methods. Next, we generalize vnCDR, unifying CDR, ZNE, and VD under a general data-driven error mitigation framework that we call UNIfied Technique for Error mitigation with Data (UNITED). We find that for sufficiently large shot resources UNITED outperforms the individual methods and vnCDR. Specifically, we employ a realistic noise model obtained from a trapped-ion quantum computer to benchmark UNITED and show that for our largest considered shot budget (10^{10}), UNITED gives the most accurate correction.
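To illustrate the ZNE ingredient of the framework: expectation values measured at several amplified noise levels are fit and extrapolated back to the zero-noise point. A minimal sketch, assuming a toy exponential-decay noise model (all numbers are illustrative, not from the benchmarked devices):

```python
import numpy as np

# Toy model: the measured expectation value decays with the noise scale
# factor c as <O>(c) = ideal * exp(-b * c). ZNE fits the measured values
# in c and evaluates the fit at c = 0 to estimate the noiseless value.
ideal, b = 0.8, 0.15
scales = np.array([1.0, 1.5, 2.0, 3.0])   # noise amplification factors
noisy = ideal * np.exp(-b * scales)       # simulated noisy measurements

coeffs = np.polyfit(scales, noisy, deg=2)  # quadratic fit in the scale factor
zne_estimate = np.polyval(coeffs, 0.0)     # extrapolate to zero noise

print(zne_estimate)  # close to the ideal value 0.8
```

In vnCDR the same variable-noise data is combined with near-Clifford training circuits, so the extrapolation is learned rather than fixed to a polynomial ansatz.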

13. Provable quantum computational advantage with the cyclic cluster state

Austin Daniel, University of New Mexico CQuIC

Abstract. We propose two Bell-type nonlocal games that can be used to prove quantum computational advantage in an objective and hardware-agnostic manner. In these games, the capability of a quantum computer to prepare a cyclic cluster state and measure a subset of its Pauli stabilizers is compared in terms of circuit depth to classical Boolean circuits with the same gate connectivity. Using a circuit-based trapped-ion quantum computer, we prepare and measure a six-qubit cyclic cluster state with an overall fidelity of 60.6% and 66.4%, before and after correcting measurement-readout errors, respectively. Our experimental results indicate that while this fidelity readily passes conventional (or depth-0) Bell bounds for local hidden-variable models, it is on the cusp of demonstrating quantum advantage against depth-1 classical circuits. Our games offer a practical and scalable set of quantitative benchmarks for quantum computers in the pre-fault-tolerant regime as the number of qubits available increases.

14. Universal limitations on quantum key distribution over a network

Siddhartha Das, Universite libre de Bruxelles

Abstract. Secure communication is at the heart of hopes for a quantum-based internet. Such a network would use principles of quantum physics to generate secret encryption keys that are, in principle, unbreakable. But many questions remain with regard to security criteria for these systems and the rate at which they can generate secure keys. We introduce the most general form of a quantum network channel, which we call a quantum multiplex channel. We determine fundamental limitations on secret key distribution over quantum multiplex channels against a quantum eavesdropper while using the most general adaptive strategy. Trusted parties are allowed to perform local operations and classical communication between channel uses. The essential step in our multifaceted project was to show that any entanglement-based protocol used to distribute cryptographic keys among many parties must distill a genuinely multipartite entangled state. Using this finding, we provide bounds on the secret key rates among an arbitrary number of trusted parties over a network. These bounds are in terms of quantities that measure the potential for entanglement generation of quantum multiplex channels. The generic structure of our protocol and bounds allows us to determine limitations on quantum key repeaters and rates of measurement-device-independent quantum key distribution. Furthermore, for certain cases of interest, we determine the maximum rates at which a secret key can be shared among trusted parties.

15. Quantum and classical Bayesian agents

John DeBrota, Tufts University

Abstract. We describe a general approach to modeling rational decision-making agents who adopt either quantum or classical mechanics based on the Quantum Bayesian (QBist) approach to quantum theory. With the additional ingredient of a scheme by which the properties of one agent may influence another, we arrive at a flexible framework for treating multiple interacting quantum and classical Bayesian agents. We present simulations in several settings to illustrate our construction: quantum and classical agents receiving signals from an exogenous source, two interacting classical agents, two interacting quantum agents, and interactions between classical and quantum agents. A consistent treatment of multiple interacting users of quantum theory may allow us to properly interpret existing multi-agent protocols and could suggest new approaches in other areas such as quantum algorithm design.

16. Heralded-multiplexed high-efficiency cascaded source of dual-rail polarization-entangled photon pairs using spontaneous parametric down conversion

Prajit Dhara, University of Arizona

Abstract. Deterministic sources of high-fidelity entangled qubit pairs encoded in the dual-rail photonic basis are a key enabling technology of many applications of quantum information processing, including high-rate high-fidelity quantum communications over long distances. The most popular and mature sources of such photonic entanglement, e.g., those that leverage spontaneous parametric down conversion (SPDC), generate an entangled quantum state that contains contributions from high-order photon terms that lie outside the span of the dual-rail basis, which is detrimental to most applications. One often uses low pump power to mitigate the effects of those high-order terms. However, that reduces the pair generation rate, and the source becomes inherently probabilistic. We investigate a cascaded source that performs a linear-optical entanglement swap between two SPDC sources, to generate a heralded photonic entangled state that has a higher fidelity (to the ideal Bell state) compared to a free-running SPDC source. Further, with the Bell swap providing a heralding trigger, we show how to build a multiplexed source, which, despite reasonable switching losses and detector loss and noise, yields a favorable fidelity versus success probability trade-off for a high-efficiency source of high-fidelity dual-rail photonic entanglement. We find, however, that there is a threshold of 1.5 dB of loss per switch, beyond which multiplexing hurts the fidelity versus success probability trade-off.

17. Sub-exponential rate versus distance with time multiplexed quantum repeaters

Prajit Dhara, University of Arizona

Abstract. Quantum communications capacity using direct transmission over length-L optical fiber scales as R ~ exp(-αL), where α is the fiber's loss coefficient. The rate achieved using a linear chain of quantum repeaters equipped with quantum memories, probabilistic Bell state measurements (BSMs), and switches used for spatial multiplexing, but no quantum error correction, was shown to surpass the direct-transmission capacity. However, this rate still decays exponentially with the end-to-end distance, viz., R ~ exp(-sαL), with s < 1. We show that the introduction of temporal multiplexing, i.e., the ability to perform BSMs among qubits at a repeater node that were successfully entangled with qubits at distinct neighboring nodes at different time steps, leads to a sub-exponential rate-vs.-distance scaling, i.e., R ~ exp(-t√(αL)), which is not attainable with just spatial or spectral multiplexing. We evaluate analytical upper and lower bounds to this rate and obtain the exact rate by numerically optimizing the time-multiplexing block length and the number of repeater nodes. We further demonstrate that incorporating losses in the optical switches used to implement time-multiplexing degrades the rate vs. distance performance, eventually falling back to exponential scaling for very lossy switches. We also examine models for quantum memory decoherence and describe optimal regimes of operation to preserve the desired boost from temporal multiplexing.
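The three rate scalings can be compared numerically. A short sketch, with the loss coefficient, s, and t chosen purely for illustration (the paper optimizes these; the constants below are not its results):

```python
import numpy as np

alpha = 0.046  # ~0.2 dB/km fiber loss converted to 1/km (assumed for illustration)
L = np.array([100.0, 500.0, 1000.0])  # end-to-end distances in km

direct   = np.exp(-alpha * L)            # direct transmission: R ~ exp(-alpha L)
repeater = np.exp(-0.5 * alpha * L)      # spatially multiplexed chain: s = 1/2 (assumed)
temporal = np.exp(-np.sqrt(alpha * L))   # time multiplexing: R ~ exp(-t sqrt(alpha L)), t = 1

for d, r, tm in zip(direct, repeater, temporal):
    print(f"direct {d:.3e}   repeater {r:.3e}   temporal {tm:.3e}")
```

At 1000 km the sub-exponential curve exceeds the exponential ones by many orders of magnitude, which is the qualitative boost the abstract describes.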

18. Demonstration of Optimal Non-Projective Measurements of Binary Coherent States with Photon Counting

Matt DiMario, University of Maryland Joint Quantum Institute and National Institute of Standards and Technology, Maryland

Abstract. Quantum state discrimination is a central problem in quantum measurement theory, with applications spanning from quantum communication to computation. Typical measurement paradigms for state discrimination involve minimum probability of error or unambiguous discrimination with minimum probability of inconclusive results. Alternatively, an optimal inconclusive measurement, a non-projective measurement, achieves minimal error for a given inconclusive probability. This more general measurement encompasses the standard measurement paradigms for state discrimination and provides a much more powerful tool for quantum information and communication. Here, we experimentally demonstrate the optimal inconclusive measurement for the discrimination of binary coherent states based on the proposal in [PRA 86, 052323 (2012)] using linear optics and photon counting. Our demonstration uses coherent displacement operations based on interference, single photon counting, and fast feedback to prepare the optimal feedback policy for the optimal non-projective quantum measurement with high fidelity. This generalized measurement allows us to transition among standard measurement paradigms in an optimal way from minimum error to unambiguous measurements for binary coherent states. We also implement the optimal minimum error measurement for phase coherent states, and propose how to leverage the binary optimal inconclusive measurement for higher dimensional inconclusive discrimination.
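For context, the minimum-error (Helstrom) benchmark that such measurements approach, and the shot-noise-limited homodyne benchmark they beat, both have closed forms for binary coherent states. A sketch, assuming equal priors and the quadrature convention in which homodyne error is (1/2) erfc(√2 |α|):

```python
import numpy as np
from math import erfc

def helstrom_error(alpha):
    # Minimum error probability for |alpha> vs |-alpha> with equal priors:
    # P_H = (1 - sqrt(1 - |<alpha|-alpha>|^2)) / 2, |<alpha|-alpha>|^2 = exp(-4|alpha|^2)
    return 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-4.0 * abs(alpha) ** 2)))

def homodyne_error(alpha):
    # Shot-noise-limited homodyne (Gaussian) benchmark.
    return 0.5 * erfc(np.sqrt(2.0) * abs(alpha))

for a in [0.2, 0.5, 1.0]:
    print(a, helstrom_error(a), homodyne_error(a))
```

The gap between the two curves is the margin that photon-counting receivers with feedback, such as the one demonstrated here, can close.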

19. Unambiguous state discrimination as a limit of an optimal nonprojective measurement of two coherent states

Spencer Dimitroff, University of New Mexico CQuIC

Abstract. Recent advances in quantum measurement theory have shown that it is possible to realize an optimal inconclusive measurement (OIM) of two nonorthogonal coherent states based on linear optics, coherent displacement operations, continuous photon counting, and fast feedback [1]. The OIM is a nonprojective quantum measurement that generalizes the minimum error measurement and the optimal unambiguous measurement of two states, achieving the minimum error probability for a given probability of a conclusive result. We study this optimal nonprojective measurement to implement the zero-error optimal unambiguous state discrimination (USD) of two coherent states. While it is possible to implement optimal USD with a measurement based on displacement operations to the vacuum state without feedback [2], it is known that such an optimal measurement cannot be implemented in the presence of experimental imperfections due to the impossibility of perfectly realizing this displacement operation. We explore the use of the OIM in this zero-error regime, which does not fully rely on displacing to vacuum, with the aim of demonstrating a USD measurement that is more robust to imperfections. This robust quantum measurement can be critical in realistic implementations of quantum communication protocols based on USD of coherent states. [1] K. Nakahira and T. S. Usuda, Phys. Rev. A 86, 052323 (2012). [2] K. Banaszek, Phys. Lett. A 253, 12 (1999). Work supported by NSF Grant #PHY-1653670 and #2137233.

20. A survey of dynamical decoupling sequences on a programmable superconducting quantum computer

Nicholas Ezzell, University of Southern California

Abstract. Dynamical decoupling (DD) is the judicious placement of control pulses to decouple a quantum system from its environment without the need for feedback. Error-suppression through DD sequences is well suited for noisy intermediate-scale quantum (NISQ) era quantum computers (QCs) due to its low resource overhead. In this work we update the status of previous DD surveys in light of recent advancements with cloud-based superconducting qubit devices. In particular, we use an IBM cloud QC with additional control of input pulses through the OpenPulse API. These additional controls allow us to implement certain robust and non-uniformly spaced sequences which--to our knowledge--have not been tested on cloud-based QCs. We use our experimental results to (1) find the best performing DD sequence and (2) test predictions from DD theory. For example, we address the effect of finite-pulse width errors, concatenation depth, and delay duration between pulses on DD performance. *This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Department of Energy Computational Science Graduate Fellowship under Award Number DE-SC0020347.
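As an illustration of the difference between uniformly and non-uniformly spaced sequences, the pi-pulse times of CPMG and Uhrig DD (UDD) over a total time T can be computed directly (a sketch; n and T are arbitrary here):

```python
import numpy as np

def cpmg_times(n, T):
    # CPMG: n uniformly spaced pi-pulses at t_j = T (2j - 1) / (2n).
    return np.array([(2 * j - 1) / (2 * n) for j in range(1, n + 1)]) * T

def udd_times(n, T):
    # Uhrig DD: t_j = T sin^2(pi j / (2n + 2)); pulses bunch toward the ends.
    return np.array([np.sin(np.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]) * T

n, T = 4, 1.0
print(cpmg_times(n, T))  # [0.125 0.375 0.625 0.875]
print(udd_times(n, T))   # non-uniform, symmetric about T/2
```

Implementing the UDD timings is exactly the kind of non-uniform spacing that requires pulse-level control such as OpenPulse, rather than a fixed gate schedule.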

21. Entanglement from tensor networks on a trapped-ion QCCD quantum computer

Michael Foss-Feig, Honeywell

Abstract. The ability to selectively measure, initialize, and reuse qubits during a quantum circuit is a crucial ingredient in scalable (error-corrected) quantum computation. Recently, it has been realized that these tools also enable "holographic" algorithms that map the spatial structure of certain tensor-network states onto the dynamics of a quantum circuit, thereby achieving dramatic resource savings when using a quantum computer to simulate many-body systems with limited entanglement. Here we explore another significant benefit of the holographic approach to quantum simulation: The entanglement structure of an infinite system, specifically the half-chain entanglement spectrum, can be extracted from a data-compressed register of "bond qubits" encoding a matrix-product state. We demonstrate this idea experimentally on a trapped-ion QCCD quantum computer by computing the near-critical entanglement entropy of the transverse-field Ising model directly in the thermodynamic limit, and show that the phase transition becomes very quickly resolved upon expanding the bond qubit register.

22. Shadow tomography of continuous-variable quantum systems

Srilekha Gandhari, Joint Center for Quantum Information and Computer Science, NIST/University of Maryland

Abstract. Shadow tomography is a framework for constructing classical descriptions of quantum states, called classical shadows, with powerful methods to bound the estimators used. Classical shadows are well-studied in the discrete-variable case, such as for qubits. Here, we extend the framework of shadow tomography to continuous-variable quantum systems, such as optical modes and harmonic oscillators. These are important quantum resources with continuous-variable descriptions. We show how to adapt homodyne and heterodyne experimental methods from optical tomography to efficiently construct classical shadows for finite-dimensional estimations of the infinite-dimensional unknown state. We provide rigorous bounds on the variance of estimating density matrices from homodyne and heterodyne measurements. We show that to reach a desired precision of the classical shadow of an N-photon density matrix with high probability, homodyne tomography requires on the order of N^5 measurements, whereas heterodyne tomography requires only on the order of N^4 measurements. We discuss extensions to multimode classical shadows and consider practical conditions required to realize our protocols.

23. Improving quantum state detection with adaptive sequential observations

Shawn Geller, National Institute of Standards and Technology, Boulder

Abstract. For many quantum systems intended for information processing, one detects the logical state of a qubit by integrating a continuously observed quantity over time. For example, ion and atom qubits are typically measured by driving a cycling transition and counting the number of photons observed from the resulting fluorescence. Instead of recording only the total observed count, one can observe the photon arrival times and get a state detection advantage by using the temporal structure in a model such as a Hidden Markov Model. We initiate the study of what further advantage may be achieved by applying pulses to adaptively transform the state during the observation. We give a three-state example where adaptively chosen transformations yield a clear advantage, and we compare performances on a prototypical ion example. We make available a software package that can be used for exploration of temporally resolved strategies with and without adaptively chosen transformations.
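A minimal illustration of the count-based baseline that the temporally resolved and adaptive strategies improve upon: a maximum-likelihood decision on the total photon count, assuming Poisson statistics with illustrative bright/dark rates (not parameters from the paper):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # Poisson probability of observing k photons at mean rate lam.
    return lam ** k * exp(-lam) / factorial(k)

def classify(counts, lam_bright=20.0, lam_dark=1.0):
    # Maximum-likelihood decision using only the total count; rates are
    # hypothetical and stand in for calibrated detection-interval means.
    if poisson_pmf(counts, lam_bright) > poisson_pmf(counts, lam_dark):
        return "bright"
    return "dark"

print(classify(15))  # bright
print(classify(3))   # dark
```

Using photon arrival times (e.g., via a Hidden Markov Model) and adaptive pulses, as in the poster, extracts information that this total-count statistic discards.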

24. Continuous-variable cluster states in hybrid optomechanical systems

Anuvetha Govindarajan, University of California Merced

Abstract. Cluster states are a class of entangled states that are maximally connected and highly resilient against noise. They can be a useful resource for measurement-based quantum computing. In this work, we present a unique approach for generating dual-rail continuous-variable cluster states in a hybrid optomechanical system that is composed of cavity modes, mechanical modes, and qubits. The scheme relies only on local interactions between neighboring modes and can be scaled to a large number of cavity and mechanical modes. We characterize the multipartite entanglement in the generated states and show that the states are robust against cavity and mechanical losses. As a direct application, we demonstrate that the entanglement in the cluster states can be transferred to two distant qubits by manipulating the cluster state via measurement.

25. Encoding a qubit in a spin

Jonathan Gross, Google

Abstract. I present a new approach for designing quantum error-correcting codes that guarantees a physically natural implementation of Clifford operations. Inspired by the scheme put forward by Gottesman, Kitaev, and Preskill for encoding a qubit in an oscillator, in which Clifford operations may be performed via Gaussian unitaries, this approach yields new schemes for encoding a qubit in a large spin in which single-qubit Clifford operations may be performed via spatial rotations. I construct all possible examples of such codes, provide universal-gate-set implementations using Hamiltonians that are at most quadratic in angular-momentum operators, and derive criteria for when these codes exactly correct physically relevant noise channels to lowest order. This technique yields a minimal eight-dimensional code in spin-7/2 that exactly corrects these errors, analogous to a perfect block code.

26. Implementing a fast unbounded quantum fanout gate using power-law interactions

Andrew Guo, University of Maryland Joint Quantum Institute

Abstract. The standard circuit model for quantum computation presumes the ability to directly perform gates between arbitrary pairs of qubits, which is unlikely to be practical for large-scale experiments. Power-law interactions with strength decaying as 1/r^α in the distance r provide an experimentally realizable resource for information processing, whilst still retaining long-range connectivity. We leverage the power of these interactions to implement a fast quantum fanout gate with an arbitrary number of targets. Our implementation allows the quantum Fourier transform (QFT) and Shor's algorithm to be performed on a D-dimensional lattice in time logarithmic in the number of qubits for interactions with α ≤ D. As a corollary, we show that power-law systems with α ≤ D are difficult to simulate classically even for short times, under a standard assumption that factoring is classically intractable. Complementarily, we develop a technique to give a general lower bound, linear in the size of the system, on the time required to implement the QFT and the fanout gate in systems that are constrained by a linear light cone. This allows us to prove an asymptotically tighter lower bound for long-range systems than was possible with previously available techniques.

27. Arbitrary Splitting of 1D Ion Chains to Scale Quantum Computing Systems

Bahaa Harraz, University of Maryland Joint Quantum Institute

Abstract. In order to scale a trapped-ion quantum computer to large numbers of qubits, it is necessary to precisely and consistently manipulate ions in space. Through the manipulation of a chain of 15 171Yb+ ions, we demonstrate the ability to arbitrarily split a chain of ions into three wells, operate on one of the wells, and merge the wells together, all while maintaining the coherence of the qubits. The implementation of such methods in existing trapped-ion systems will allow for longer and more complex quantum circuits in the near future, following recent work on quantum circuit verification [1], measurement-induced phase transitions [2], and NMR simulation [3]. [1] https://arxiv.org/abs/2107.11387 [2] https://arxiv.org/abs/2106.05881 [3] https://www.nature.com/articles/s42256-020-0198-x This work is supported by the ARO with funding from the IARPA LogiQ program, the NSF Practical Fully-Connected Quantum Computer program, the DOE program on Quantum Computing in Chemical and Material Sciences, the AFOSR MURI on Quantum Measurement and Verification, and the AFOSR MURI on Interactive Quantum Computation and Communication Protocols.

28. Quantum simulation of many-body non-equilibrium dynamics in tilted 1D Fermi-Hubbard models

Bharath Hebbe Madhusudhana, Max-Planck-Institute for Quantum Optics

Abstract. Thermalization of isolated quantum many-body systems is a redistribution of quantum information within the system in such a way that macroscopic variables remain experimentally accessible and microscopic variables recede into inaccessible parts of the Hilbert space. A question of fundamental importance to quantum information theory is therefore when quantum many-body systems fail to thermalize, i.e., feature non-ergodicity. A useful test-bed for the study of non-ergodicity is the tilted Fermi-Hubbard model, which is directly accessible in experiments with ultracold atoms in optical lattices. Here we experimentally study non-ergodic behavior in this model by tracking the evolution of an initial charge-density wave over a wide range of parameters, where we find a remarkably long-lived initial-state memory [1]. In the limit of large tilts, we identify the microscopic processes from which the observed dynamics arise. These processes constitute an effective Hamiltonian, whose validity we demonstrate experimentally [2]. This effective Hamiltonian features the novel phenomenon of Hilbert-space fragmentation. In the intermediate-tilt regime, where these effective models are no longer valid, we show that features of fragmentation are still discernible in the dynamics. Finally, we explore the relaxation dynamics of the imbalance in a 2D tilted Fermi-Hubbard system. [1] Sebastian Scherg et al., arXiv:2010.12965 [2] Thomas Kohlert et al., arXiv:2106.15586

29. Dynamical subset sampling of quantum error correcting circuits

Sascha Heußen, RWTH Aachen University

Abstract. Quantum error correcting stabilizer codes enable protection of quantum information against errors during storage and processing. Efficiently simulating faulty gate operations poses numerical challenges beyond those set by circuit depth or qubit number. Dynamical subset sampling makes feasible the efficient simulation of non-deterministic quantum error correcting protocols, such as Shor-type error correction or flag-qubit based fault-tolerant circuits, where intermediate measurements and classical feedback determine the actual circuit sequence performed. As an importance sampling technique, dynamical subset sampling uses computational resources effectively by sampling only the most relevant sequences of quantum circuits, estimating a protocol's logical failure rate with well-defined error bars instead of post-selecting on classical measurement data. We outline the method along with two examples that demonstrate its capability to reach a given target variance on the logical failure rate with five orders of magnitude fewer samples than Monte Carlo simulation. Our method naturally allows for efficient simulation of realistic multi-parameter noise models describing faulty quantum processor architectures, e.g. based on trapped ions.
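The flavor of subset sampling can be illustrated with a minimal sketch. Everything here is an invented toy model, not the authors' implementation: a circuit with n independent fault locations, a made-up rule that any two or more faults cause a logical failure, and stratification by the number of faults k with exact binomial weights.

```python
import math
import random

def logical_failure(fault_locations):
    # Toy stand-in for running the full error-correction circuit with faults
    # inserted at the given locations; here any two or more faults are
    # declared a logical failure.
    return 1.0 if len(fault_locations) >= 2 else 0.0

def subset_sampling(n, p, k_max, shots=100, seed=0):
    # Stratify by the number of faults k: sample fault placements within each
    # subset and weight the subset by its exact binomial probability, so rare
    # high-k configurations enter with their true weight instead of being
    # rarely hit by direct Monte Carlo sampling.
    rng = random.Random(seed)
    estimate = 0.0
    for k in range(k_max + 1):
        weight = math.comb(n, k) * p**k * (1 - p) ** (n - k)
        mean_fail = sum(
            logical_failure(rng.sample(range(n), k)) for _ in range(shots)
        ) / shots
        estimate += weight * mean_fail
    return estimate
```

Truncating at k_max introduces a controlled bias bounded by the total probability of the neglected subsets, which is what gives the method its well-defined error bars.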

30. Error-correcting entanglement swapping using a practical logical photon encoding

Paul Hilaire, Virginia Tech

Abstract. Using linear optics, a two-photon Bell state measurement (BSM) succeeds with a probability of at best 50%. This limits the performance of many quantum repeater (QR) protocols that use these photonic BSMs either for quantum teleportation or for entanglement swapping. QRs also require loss tolerance, to transfer information at a higher rate than direct fiber transmission, and error correction. A loss-tolerant and error-corrected BSM would therefore enable efficient all-photonic QR schemes. By using either ancillary photonic qubits or nonlinear interaction with atoms, it is possible to overcome the 50% limit, but these solutions are neither loss-tolerant nor fault-tolerant. To achieve this, we need to logically encode the photonic qubits and thus perform a logical BSM. Here, we propose to use a photonic tree graph state, a logical encoding that can be efficiently produced with a few matter qubits. We develop two logical BSM schemes, denoted "static" and "dynamic", that are both loss-tolerant and error-corrected. In the static protocol, each photonic qubit of a tree graph state is measured with its corresponding qubit of the second tree via a standard two-photon BSM, which can be implemented with static linear optics. The dynamic protocol requires feedforward and yields better performance. These results can be directly applied to an all-photonic QR protocol that is fault-tolerant, a feature that was lacking in the original proposal.

31. Quantum Pattern Matching Using IBM Qiskit

Md. Sakibul Islam, Shahjalal University of Science and Technology

Abstract. Several classical algorithms exist for pattern matching, such as KMP, but harnessing a quantum advantage requires a quantum counterpart. Researchers from IonQ and the University of Maryland, College Park proposed a quantum algorithm to search for a pattern in a string or other databases. In this work, we verify the algorithm at the circuit level using IBM Qiskit for a string of length 8. To exploit quantum performance for pattern searching, we implemented a general oracle for Grover's algorithm. To validate our circuit developed in Qiskit, we plotted the pattern matches against the string in a histogram.

32. Automation of magneto-optical trap laser tuning for neutral atom quantum computing

Andrew Jarymowycz, California Polytechnic State University

Abstract. Neutral atom quantum computers use atoms trapped by light as qubits. Our group computationally explores light patterns useful for quantum computing [1-3] and investigates their properties experimentally. To hold atoms in light patterns, they must first be cooled to sub-mK temperatures by a magneto-optical trap (MOT) using lasers and magnetic fields. The MOT requires two lasers (trap and repump) tuned to the correct atomic transitions to within 1 part in 100,000,000. We developed Python code to automate the time-consuming process of tuning our diode lasers. The tuning steps are as follows. Atomic vapor, with atoms moving in all directions at room-temperature speeds, has a spectrum broadened by the Doppler effect. Thus, we first make the detailed, hyperfine energy levels of our 87Rb atoms visible using saturated absorption. Next, to identify the correct energy level transitions for each laser, we calculate the conversion factor between the applied tuning voltage and the resulting laser frequency by comparing the narrow peaks of a 300 MHz free-spectral-range Fabry-Perot interferometer to the saturated absorption spectrum. Lastly, the optoelectronic feedback signal is adjusted to place zero feedback voltage at the correct laser frequency. The only remaining manual step is flipping the lock switch to close the feedback loop. Now the cold atom sample is ready to be loaded into the light patterns for quantum computing. [1] PRA 73, 013409 (2006), [2] PRA 82, 063420 (2010), [3] PRA 83, 023408 (2011).

33. Semiclassical Dynamics, Classical-Quantum Control, and Quantum Computation

Ilon Joseph, Lawrence Livermore National Laboratory

Abstract. Quantum information technology relies on essentially classical hardware to control the underlying quantum hardware. Yet, achieving a fully self-consistent coupling of classical and quantum subsystems that correctly predicts backaction effects is challenging from both the physical and mathematical perspectives. In this work, a self-consistent semiclassical-quantum coupling theory is derived by considering a bipartite quantum system and taking the semiclassical limit for one of the subsystems. This approach yields a self-consistently coupled Hamiltonian that evolves via the configuration space version of the Koopman-van Hove (KvH) Hamiltonian [1,2]. There is a natural mapping from configuration space to the classical phase space where the evolution becomes both linear and unitary. This yields a straightforward proof of the classical-quantum coupling methodology proposed in [1]. Moreover, the semiclassical version has different boundary conditions than the classical version that improve accuracy by generating “quantum” effects such as finite tunneling amplitudes in classically forbidden regions and the Einstein-Brillouin-Keller quantization conditions. Finally, a modified version of the quantum algorithm proposed in [2] can naturally be used to efficiently simulate coupled classical-quantum and semiclassical-quantum dynamics on a quantum computer. [1] D. I. Bondar, F. Gay-Balmaz, C. Tronci, Proc. R. Soc. A 475 20180879 (2019). [2] I. Joseph, Phys. Rev. Research 2, 043102 (2020).

34. Observing a Changing Hilbert-Space Inner Product

Salini Karuvade, University of Calgary

Abstract. In quantum mechanics, physical states are represented by rays in Hilbert space, a vector space equipped with an inner product whose physical meaning arises as the overlap between a pure state (description of preparation) and a projective measurement. However, current quantum theory does not formally address the consequences of a changing inner product during the interval between preparation and measurement. We establish a theoretical framework for such a changing inner product, which we show is consistent with standard quantum mechanics. Furthermore, we show that this change is described by a quantum channel, which is tomographically observable, and we elucidate how our result is strongly related to the rapidly growing topic of PT-symmetric quantum mechanics. We explain how to realize experimentally a changing inner product for a qubit in terms of a qutrit protocol with a unitary channel.

35. RLD Fisher Information Bound for Multiparameter Estimation of Quantum Channels

Vishal Katariya, Louisiana State University

Abstract. One of the fundamental tasks in quantum metrology is to estimate multiple parameters embedded in a noisy process, i.e., a quantum channel. In this paper, we study fundamental limits to quantum channel estimation via the concept of amortization and the right logarithmic derivative (RLD) Fisher information value. Our key technical result is the proof of a chain-rule inequality for the RLD Fisher information value, which implies that amortization, i.e., access to a catalyst state family, does not increase the RLD Fisher information value of quantum channels. This technical result leads to a fundamental and efficiently computable limitation for multiparameter channel estimation in the sequential setting, in terms of the RLD Fisher information value. As a consequence, we conclude that if the RLD Fisher information value is finite, then Heisenberg scaling is unattainable in the multiparameter setting.

36. Localization and topology in Floquet systems with quantized drive

Michael Kolodrubetz, University of Texas at Dallas

Abstract. Time-periodic (Floquet) drive has become a powerful tool to engineer phases of matter, both in equilibrium and far from equilibrium. In this talk, I show that treating drive photons as quantized degrees of freedom, as in cavity QED, leads to interesting phases of coupled light and matter. This is non-trivial even for such “basic” phenomena as many body localization [Ng and MHK, PRL 122, 240402 (2019)], where the localizing interactions compete against global coupling to the quantized cavity mode. I will show that such a mapping is fundamental for certain topological phases of matter, such as the Floquet-Thouless energy pump [MHK et al, PRL 120, 150601 (2018)], where topological pumping naturally occurs by a quantized backaction on the drive. These phases may be realizable experimentally in many body cavity QED and superconducting circuit QED.

37. Eigenstate entanglement in integrable collective spin models

Meenu Kumari, Perimeter Institute

Abstract. The average entanglement entropy (EE) of the energy eigenstates in non-vanishing partitions has been recently proposed as a diagnostic of integrability in quantum many-body systems. We examine this diagnostic in the class of collective spin models characterized by permutation symmetry in the spins. The well-known Lipkin-Meshkov-Glick (LMG) model is a paradigmatic integrable system in this class. We calculate analytically the average EE of the Dicke basis in any non-vanishing bipartition, and show that in the thermodynamic limit, it converges to 1/2 of the maximal EE in the corresponding bipartition. Using finite-size scaling, we numerically demonstrate that the aforementioned average EE in the thermodynamic limit is universal for all parameter values of the LMG model. Our analysis illustrates how the value of the average EE in the thermodynamic limit may be a robust criterion for identifying integrability.

38. Disentangling native errors, Trotter errors, chaos, and dynamical instabilities in quantum simulation

Kevin Kuper, University of Arizona

Abstract. Noisy, intermediate-scale quantum (NISQ) processors are improving rapidly but remain well short of requirements for fault-tolerant computation. In the meantime, much effort has focused on the development of quantum simulators that operate without error correction. So-called "digital" processors can simulate non-native Hamiltonians through Trotterization, wherein the evolution is broken into discrete steps using a Trotter-Suzuki expansion. When simulating the evolution over a total time T, this introduces Trotter errors that scale inversely with the number of time steps. For optimal performance, this must be weighed against the native errors inherent to the processor hardware implementation, which scale roughly in proportion to the number of time steps. Notably, the optimal step size can be affected by the appearance of chaos in the Trotterized dynamics, which leads to hypersensitivity to both Trotter and native errors. We investigate each of these error regimes in quantum simulations running on a small, highly accurate quantum processor based on the combined electron-nuclear spins of a Cs-133 atom. As a concrete example, we focus on the Lipkin-Meshkov-Glick Hamiltonian, which when Trotterized becomes the quantum kicked top, a well-studied system that exhibits chaos and dynamical instability. Finally, we show that out-of-time-ordered correlator (OTOC) measurements can be implemented and used to identify the presence of chaos and instabilities as they appear and disappear with changing Trotter step size.
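The tradeoff between Trotter and native errors described above can be sketched with a toy error budget. The a/n and b*n scalings and the coefficient values below are illustrative assumptions, not the poster's measured data:

```python
import math

def total_error(n, a=1.0, b=1e-3):
    # Combined error for n Trotter steps: a first-order Trotter contribution
    # that falls off as a/n, plus accumulated native gate error growing as b*n.
    return a / n + b * n

def optimal_steps(a=1.0, b=1e-3):
    # Minimizing a/n + b*n over continuous n gives n* = sqrt(a/b), where the
    # two contributions are equal and the total error is 2*sqrt(a*b).
    return math.sqrt(a / b)
```

In this simple picture the optimum sits where the two error sources balance; chaos in the Trotterized dynamics can shift this balance by amplifying the sensitivity to both contributions.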

39. Choosing sequence lengths for single-shot randomized Clifford benchmarking

Alex Kwiatkowski, University of Colorado

Abstract. We analyze randomized benchmarking of Clifford gates when a new random sequence is drawn for each single shot of the experiment, where a single shot consists of a state preparation followed by a gate sequence and then a measurement. We present calculations of Fisher-efficient choices of sequence lengths for a single-qubit experiment that minimize the total experiment time needed to achieve a fixed statistical uncertainty and take into account the different time-costs of shots with different sequence lengths. We also describe strategies for choosing sequence lengths that can help diagnose the presence of gate errors that are not constant throughout an experiment.
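A minimal sketch of picking a time-efficient sequence length, assuming a standard single-exponential survival model A*p^m + B and an invented per-shot time cost t0 + tg*m; this is an illustration, not the authors' procedure:

```python
def survival(m, p, A=0.5, B=0.5):
    # Probability of the "success" outcome after a length-m random Clifford
    # sequence, in the standard exponential-decay model.
    return A * p**m + B

def fisher_per_time(m, p, A=0.5, B=0.5, t0=1.0, tg=0.01):
    # Per-shot Fisher information about the decay parameter p for a Bernoulli
    # outcome with success probability q(m), divided by the assumed time cost
    # of one shot (t0 overhead plus tg per gate).
    q = survival(m, p, A, B)
    dq_dp = A * m * p ** (m - 1)
    return (dq_dp**2 / (q * (1.0 - q))) / (t0 + tg * m)

def best_length(p, m_max=2000, **kwargs):
    # Exhaustively scan candidate sequence lengths for the choice that yields
    # the most information about p per unit experiment time.
    return max(range(1, m_max + 1), key=lambda m: fisher_per_time(m, p, **kwargs))
```

The optimum is interior: very short sequences barely decay and carry little information about p, while very long ones are expensive and saturate near B.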

40. Multiplicative Quantum Relative Entropy Comparison and Quasi-factorization

Nicholas LaRacuente, University of Chicago

Abstract. Purely multiplicative comparisons of quantum relative entropy are desirable but challenging to prove. We show such comparisons for relative entropies between comparable densities, including the subalgebra-relative entropy and its perturbations. These inequalities are asymptotically tight, approaching known, tight inequalities as the perturbation size approaches zero. We then obtain a quasi-factorization inequality, which compares the relative entropies to each of several subalgebraic restrictions with that to their intersection. We apply quasi-factorization to uncertainty-like relations and to conditional expectations arising from graphs. Quasi-factorization yields decay estimates of optimal asymptotic order on mixing processes described by finite, connected, undirected graphs.

41. Three-Dimensional Numerical Simulations of BEC Transport Using Shortcuts to Adiabaticity

Chris Larson, Miami University

Abstract. We report on our numerical simulations of high-fidelity, fast quantum control of Bose-Einstein condensates (BECs) as we extend them to full 3D solutions of the Gross-Pitaevskii equation. We simulate a 3D painted potential that provides complete confinement of the atoms. Painted potentials allow for arbitrary and dynamic traps, which control the spatial transport of the BEC. To achieve high quantum fidelity after transport, we implement shortcuts to adiabaticity (STAs) to design the BEC trajectory in our simulations. STAs allow fast movement while suppressing excitations that can result from rapid transitions of the quantum state. In our 3D simulations, quantum fidelities resulting from different, experimentally viable transport times and trap depths are compared. Using the measured frequencies of the different traps and by simulating transport over multiples of those periods, we seek to identify and analyze a possible cause of lower-than-expected post-transport fidelities.

42. Progress towards chip-scale transverse laser cooling of thermal atomic beams

Chao Li, Georgia Institute of Technology

Abstract. We present progress toward on-chip laser cooling of thermal rubidium atomic beams in an integrated platform using MEMS collimators and mirrors. A thin silicon capillary array tailors the transverse velocity distribution of thermal atoms into the capture range of laser cooling within a few millimeters. Immediately afterward, we use micromirrors to precisely overlap a strong elliptical standing wave (e.g., 0.2 mm x 5 mm), i.e., stimulated blue molasses, with the atomic beam sheet. As a result, the transverse velocity spread can be reduced to about 1 m/s within a sub-centimeter travel distance. Furthermore, such an on-chip approach achieves high light intensity with relatively low laser power consumption, owing to the small cross-sectional area needed for miniaturized atomic beams. So far, we have fabricated silicon collimators, performed stringent tests of the micromirrors' resistance to alkali attack, and implemented Doppler-sensitive Raman spectroscopy to thoroughly diagnose the transverse velocity distribution. This on-chip hybrid of passive and active collimation paves the way toward a high-brightness atomic beam source with applications in compact atomic instruments, such as portable atomic-beam-based clocks and gyroscopes.

43. Entangled photon factory: How to generate quantum resource states from a minimal number of quantum emitters

Bikun Li, Virginia Tech

Abstract. Multi-photon graph states are a fundamental resource in quantum communication networks, distributed quantum computing, and sensing. These states can in principle be created deterministically from quantum emitters such as optically active quantum dots or defects, atomic systems, or superconducting qubits. However, finding efficient schemes to produce such states has been a long-standing challenge. Here, we present an algorithm that, given a desired multi-photon graph state, determines the minimum number of quantum emitters and precise operation sequences that can produce it. The algorithm itself and the resulting operation sequence both scale polynomially in the size of the photonic graph state, allowing one to obtain efficient schemes to generate graph states containing hundreds or thousands of photons.

44. Universality of Dicke superradiance in arrays of quantum emitters

Stuart Masson, Columbia University

Abstract. Collective effects in subwavelength atomic ensembles lead to exotic optical properties that have begun to be explored in experimental systems. Here, we investigate the physics of collective decay in ordered atomic systems, going beyond single-excitation phenomena. The decay of a fully inverted ensemble of atoms at the same spatial location is well known: the emitted light initially grows in intensity and photons are emitted in a short burst, so-called Dicke superradiance. However, atoms separated by large distances act independently and their decay is exponential, monotonically decreasing in time. We connect these separate regimes by considering mesoscopic atomic arrays. We show that the superradiant burst survives at small interatomic distances, though with a reduced amplitude, and late decay becomes strongly subradiant and directional. As the interatomic separation is increased, the size of the burst decreases, eventually disappearing. The crossover between these regimes can be identified solely by investigating very early dynamics. We also derive a concise expression -- which is applicable to arrays of any dimensionality and topology -- that allows us to predict the critical distance beyond which superradiance disappears. This allows for predictions to be made for large atom numbers, and identification of geometries where this physics could be probed experimentally.

45. Tunable Couplers For Superconducting Qubits

Nicholas Materise, Colorado School of Mines

Abstract. The integration of parametric tunable couplers in superconducting qubits has enabled myriad demonstrations of quantum advantage in existing planar architectures. Recent progress in semiconductor-superconductor integration promises alternative tuning modalities capable of replacing existing flux tuning schemes. We outline a gate-tunable, voltage-controlled coupler realized with a two-dimensional electron gas in a III-V semiconductor heterostructure. We give an estimate of the on/off ratio and the dielectric-limited loss from the inclusion of the 2D coupler in a two-qubit system.

46. How efficiently can we simulate local observables in the open system dynamics of Ising models?

Anupam Mitra, University of New Mexico CQuIC

Abstract. In the quantum simulation of many-body systems, we are often interested in estimating expectation values of local observables associated with order parameters. These observables are believed to be more robust to decoherence, making them more accessible to noisy intermediate-scale quantum (NISQ) simulators. Recent work has shown that approximate simulations of NISQ devices can be tractable [Phys. Rev. X 10, 041038], especially in the presence of decoherence [Quantum 4, 318; Phys. Rev. Research 3, 023005]. In this work, we explore estimating local reduced density operators (marginals). We show that the Hilbert-Schmidt distance between appropriate marginals can be used to upper bound the error in estimating expectation values of observables. Focusing on quantum simulation of quench dynamics of 1D Ising spin chains on a noisy quantum device, we model open quantum system dynamics with a single-spin, local, unital decoherence map given by a Lindblad master equation, which we solve using quantum trajectories and matrix product state (MPS) time evolution methods. We numerically find that marginals can be well approximated using MPS with modest bond dimension, giving numerical evidence that local observables can be efficiently approximated. Moreover, decoherence permits a more aggressive MPS truncation for approximating local marginals, suggesting that classical simulation of local observables may be tractable and robust to unital decoherence maps.

47. Weak measurements reconcile incompatible observables

Jonathan Monroe, Washington University in St. Louis

Abstract. Traditional uncertainty relations dictate a minimal amount of noise in incompatible projective quantum measurements. The noise is often thought to come from the measurements' failure to commute. However, not all measurements are projective. In particular, weak measurements are minimally invasive tools for obtaining partial state information without projection. In this talk, I'll describe an experiment in which such measurements can reconcile two incompatible (non-commuting) strong measurements. The weak measurements' slight back action on the state accounts for a majority of the reconciliation. The measurements obey an entropic uncertainty relation based on generalized measurement operators. In this relation a weak value appears, lowering the uncertainty bound.

48. Lefschetz thimble quantum Monte Carlo for spin systems

Connor Mooney, George Mason University

Abstract. Monte Carlo simulations are often useful tools for modeling quantum systems, but in some cases they run into the sign problem, where the translation from quantum to classical systems results in an oscillating phase attached to the probabilities. This sign problem generally leads to an exponential slowdown in the time taken by a Monte Carlo algorithm to reach any given level of accuracy, and it has been shown that completely solving the sign problem for any given quantum system is an NP-hard task. Several techniques exist for mitigating the slowdown associated with the sign problem in specific cases, however, and one effective method in high-energy field theories has been to deform the Monte Carlo simulation's plane of integration onto Lefschetz thimbles, that is, complex hypersurfaces of stationary phase. We extend this methodology to discrete spin systems by utilizing spin coherent state path integrals to re-express the discrete spin system's partition function in a continuous-variable setting. This translation introduces additional challenges into the Lefschetz thimble method, which we address. We show that these techniques do indeed lessen the sign problem in spin systems and demonstrate them on simple systems with sign problems.

49. Trapped-ion metastable qubits: the scheme and scattering errors

Isam Moore, University of Oregon

Abstract. While all of the basic primitives required for universal quantum computing (QC) have been demonstrated in trapped-ion qubits with high fidelity, it is currently not possible to simultaneously realize the highest achieved fidelities in a single ion species - typically two species are required. This is a serious impediment to the development of practical quantum computers. However, there is the possibility for achieving high-fidelity, full functionality in a single species: augmentation of an existing species with new functionality via novel encoding schemes in metastable states (“m-type qubits”). This allows for user-selectable, ion-specific activation of the necessary functions on demand (e.g. information storage, qubit coupling to motion, cooling, and state preparation and measurement). Photon scattering is a common source of error in such schemes and has been investigated in more common ion-trap QC architectures; however, photon-scattering-induced errors have not yet been characterized in m-type qubits, where large detuning requirements necessitate including effects which have been ignored in past studies of qubit schemes in smaller detuning regimes. We present an introduction to the m-type scheme as well as calculations of scattering errors in m-type qubit gates. This research was supported by the U.S. Army Research Office through grant W911NF-20-1-0037.

50. Quantum chaos and emergent time crystal phases in driven collective spin systems

Manuel Munoz-Arias, University of New Mexico CQuIC

Abstract. We introduce a family of kicked p-spin models describing the Floquet dynamics of a collective spin undergoing Larmor precession and p-order twisting. The twisting corresponds to all-to-all p-body interactions between N constituent spin-½ particles of the collective J=N/2 spin. We fully characterize the classical nonlinear dynamics of these models, which describe the motion of a top on a sphere, including the transition to global Hamiltonian chaos. The classical analysis allows us to build a classification for this family of models, distinguishing between p = 2 and p > 2, and between models with odd and even p's. Many signatures of these nonlinear dynamics carry over to the quantum domain, leading to complex behavior. We present a connection between resonances and bifurcations in the mean-field description of these models and the emergence of Floquet time crystal (FTC) phases. We provide evidence that a given kicked p-spin gives rise to at least one FTC phase in the vicinity of the 1-p bifurcation, with more FTC phases being possible depending on the value of p. Finally, in models hosting at least two different FTC phases, we introduce the idea of time-crystal switching.
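The mean-field dynamics described above can be iterated directly as a map on the unit sphere. Below is a minimal sketch of one plausible Floquet step (Larmor precession followed by a p-order twist); the precise parameterization is an illustrative assumption, not taken from the abstract:

```python
import numpy as np

def kicked_p_spin_step(x, y, z, alpha, k, p):
    """One illustrative Floquet period of a mean-field kicked p-spin:
    Larmor precession about the x-axis by angle alpha, followed by an
    instantaneous p-order twist, modeled here as a rotation about z
    by an angle k * p * z**(p-1) (hypothetical parameterization)."""
    # precession about x
    y, z = (y * np.cos(alpha) - z * np.sin(alpha),
            y * np.sin(alpha) + z * np.cos(alpha))
    # p-order twist: z-axis rotation whose angle depends on z
    phi = k * p * z**(p - 1)
    x, y = (x * np.cos(phi) - y * np.sin(phi),
            x * np.sin(phi) + y * np.cos(phi))
    return x, y, z

# iterate a trajectory; the map is a composition of rotations,
# so it stays on the unit sphere
state = (0.0, 1 / np.sqrt(2), 1 / np.sqrt(2))
for _ in range(100):
    state = kicked_p_spin_step(*state, alpha=np.pi / 2, k=2.5, p=2)
r = np.sqrt(sum(c**2 for c in state))
print(round(r, 6))  # 1.0
```

Scanning the kick strength k for trajectories like this (e.g. via Lyapunov exponents) is how the classical transition to global chaos would be mapped out.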

51. Preparing a cold atomic ensemble with high optical depth for a light-matter interface in free space

Jacob Nelson, University of New Mexico CQuIC

Abstract. Atomic ensembles provide a versatile light-matter interface for the preparation of complex quantum states. Light-matter interfaces have allowed for the preparation of highly squeezed spin states and complex non-Gaussian states based on measurement backaction for metrology and quantum information processing. We are working towards preparing a large atomic ensemble of cold cesium atoms with high optical depth in free space in a low-loss system to increase entanglement between light and matter, which can result in a high efficiency for quantum state preparation. We investigate methods to optimize the optical depth and temperature of the atomic ensemble by controlling the cooling laser parameters and magnetic fields in our magneto-optical trap. Our goal is to achieve an optical depth of 300 or higher, which will enhance the measurement strength for quantum state preparation based on measurement backaction.

52. Classification of Small Triorthogonal Codes

Sepehr Nezami, California Institute of Technology

Abstract. Triorthogonal codes are a class of quantum error correcting codes used in magic state distillation protocols. We classify all triorthogonal codes with n + k ≤ 38, where n is the number of physical qubits and k is the number of logical qubits of the code. We find 38 distinguished triorthogonal subspaces and show that every triorthogonal code with n + k ≤ 38 descends from one of these subspaces through elementary operations such as puncturing and deleting qubits. Specifically, we associate each triorthogonal code with a Reed-Muller polynomial of weight n+k, and classify the Reed-Muller polynomials of low weight using the results of Kasami, Tokura, and Azumi and an extensive computerized search. In an appendix independent of the main text, we improve a magic state distillation protocol by reducing the time variance due to stochastic Clifford corrections.

53. Strong parametric dispersive shifts in a statically decoupled multi-qubit cavity QED system

Taewan Noh, National Institute of Standards and Technology, Boulder

Abstract. Engineering dispersive interactions in cavity quantum electrodynamic (QED) systems is important for developing high fidelity qubit measurement, fast logic gates, state preparation and error correction protocols. In this talk, we describe our experiments with two transmon qubits coupled to a lumped element cavity through a shared dc-SQUID. We show that our system can provide protection from decoherence processes for both qubits by minimizing coupling to the readout cavity during qubit operations. Parametric driving of the SQUID with an oscillating magnetic flux enables independent, dynamical tuning of the qubit-cavity interactions for both qubits. The strength and detuning of this interaction can be fully controlled through the choice of the parametric pump frequency and amplitude. As a practical demonstration, we present the result of pulsed parametric dispersive readout of each qubit while statically decoupled from the cavity. Furthermore, we show that it is possible to perform joint-readout of two-qubit states using our parametric approach. In addition to realizing many features of standard cavity QED, this parametric scheme creates a new tunable cavity QED framework for developing quantum information systems with various future applications, such as entanglement and error correction via multi-qubit parity readout, state and entanglement stabilization, and parametric logical gates.

54. Entanglement detection using neural networks

Diego Alberto Olvera Millán, Universidad Nacional Autónoma de México

Abstract. Entanglement is an important resource for quantum technologies, but its detection and classification cannot be performed efficiently across different kinds of quantum states. In particular, quantum mixed states require computationally demanding methods, such as those of convex roof constructions. In this work, I train an artificial neural network (ANN) to perform the classification between entangled and separable two-qubit states, using expected values of products of Pauli matrices as the entries of the feature vector, x. The training is performed using only random pure states sampled from the invariant Haar measure. It is found that, using the 15 linearly independent products of Pauli matrices, an accuracy of 98% is achieved for states drawn from the same distribution, and the accuracy for states drawn from the Bures distribution can reach up to 80% (after applying regularization to the model). Using 4 non-orthogonal products of the Pauli matrices, an accuracy of 91% is achieved for states sampled from the Haar distribution, and, when dealing with states sampled from the Bures distribution with purity and concurrence higher than 0.7, an accuracy of up to 84% is achieved.
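The feature vector described above is straightforward to compute for any given two-qubit state. A small numpy sketch (the `features` and `concurrence` helpers are illustrative, not the author's code; the concurrence supplies the training label for pure states):

```python
import numpy as np
from itertools import product

# single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def features(rho):
    """The 15 expectation values <s1 (x) s2> for all two-qubit Pauli
    products except the trivial I (x) I."""
    labels = [a + b for a, b in product('IXYZ', repeat=2) if a + b != 'II']
    return {l: float(np.real(np.trace(rho @ np.kron(paulis[l[0]], paulis[l[1]]))))
            for l in labels}

def concurrence(psi):
    """Concurrence of a pure two-qubit state (a,b,c,d): C = 2|ad - bc|."""
    a, b, c, d = psi
    return 2 * abs(a * d - b * c)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())
f = features(rho)
print(round(concurrence(bell), 3), round(f['XX'], 3),
      round(f['YY'], 3), round(f['ZZ'], 3))  # 1.0 1.0 -1.0 1.0
```

Stacking such feature dictionaries for Haar-random pure states (labeled entangled/separable by the concurrence) gives the training set the abstract describes.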

55. Implementing qudit quantum logic gates on nuclear spins in Strontium-87 atoms

Sivaprasad Omanakuttan, University of New Mexico CQuIC

Abstract. We study the ability to implement quantum logic gates on states of the I=9/2 nuclear spin in 87Sr, a d=10 dimensional qudit, using quantum optimal control. For one-qudit SU(10) gates, implemented through a combination of nuclear spin resonance and a tensor AC-Stark shift, the system is quantum controllable solely by modulating the phase of a radio-frequency magnetic field. We numerically study the quantum speed limit, optimal parameters, and the fidelity of arbitrary state preparation and full SU(10) maps, including the presence of decoherence due to optical pumping. We also study the use of robust control to mitigate some dephasing due to inhomogeneities in the light shift. We find that we can prepare an arbitrary Haar-random state with average fidelity 0.9992, and an arbitrary Haar-random SU(10) unitary map with average fidelity 0.9923. The addition of Rydberg dressing together with the techniques above allows us to create any symmetric entangling two-qudit gate, such as CPHASE. Our techniques can be used to encode any qudit from d=2 to d=10 in the nuclear spin and thus provide a good platform to explore the various possibilities of quantum information processing with qudits, including metrological enhancement with qudits, quantum simulation, and universal quantum computation.

56. Halving ion-trap two-qubit gate time while enhancing frequency-drift robustness

Eduardo J. Paez, University of Calgary

Abstract. Two-qubit gate performance is vital for scaling up ion-trap quantum computing, and quantum-control methods can reduce both the gate time $\tau$ and the gate error rate. We develop a full model for two-qubit gates effected in a Paul trap with multiple ions, described by a master equation incorporating the single-ion quadrupolar effective Rabi frequency, an Autler-Townes shift, off-resonant transitions, Raman and Rayleigh scattering, laser-power fluctuations, motional heating, cross-Kerr phonon coupling and laser spillover, with no fitting parameters whatsoever. To minimize $\tau$ while maintaining fixed gate fidelity, we articulate and solve the feasibility problem: given the seven-ion master equation with all ions prepared in the ground state and vibrational modes initially in a ~$\mu$K thermal state, with the target being two of the ions evolving over time $\tau$ into a Bell state, and subject to a strict upper bound on laser power, design an amplitude-modulated Raman laser pulse that deterministically yields a close approximation to this Bell state. We solve our problem by global optimization and obtain a pulse sequence for achieving a two-qubit gate in seven trapped $^{171}$Yb$^{+}$ ions; our pulse executes in half the time required by state-of-the-art methods while not acquiring any further gate error, and it is robust against long-term drift in the frequency detuning and an imperfect initial motional ground state.

57. Inversion interferometry for resolving point sources in real microscopes

Sujeet Pani, University of New Mexico CQuIC

Abstract. Modal imaging can provide unprecedented resolution for imaging point sources, such as single-molecule fluorescent tags that are used to study biological samples. Theoretical work has shown that modal imaging can approach the quantum limits in optical resolution [1], albeit assuming highly idealized measurements described by complex quantum operators. To make the potential of quantum measurements available for super-resolution microscopy it is imperative to understand the critical parameters in optical systems for realizing such quantum measurements under realistic conditions. Our efforts focus on understanding these critical parameters and developing an optical system for super-resolving two incoherent point sources (i.e. fluorophore tags) for the study of biological samples. We use a Mach-Zehnder interferometer with optical field inversion to realize image inversion interferometry, in principle allowing for near-optimal imaging of point sources. We use laser light collimated with a microscope objective to mimic point sources imaged in a microscope. We investigate different methods for image inversion, and the effects of aberrations and source bandwidth on the interference visibility. In a second stage, we will use this setup with a fluorescence microscope, allowing measurements of sub-diffraction-limit-sized fluorescent beads, single-molecule fluorescence and, finally, protein organization and dynamics in biological samples. [1] M. Tsang, et al., Phys. Rev. X 6, 031033 (2016)

58. Quantum optimal control on Jaynes-Cummings lattices

Prabin Parajuli, University of California Merced

Abstract. Engineering quantum states and quantum dynamics that are resilient against noise and errors requires the precise manipulation of quantum devices. The well-established adiabatic algorithm often requires a long evolution time to maintain a quantum system in a desired many-body state. However, a long evolution time makes the system prone to decoherence and errors. The quantum optimal control (QOC) technique has been studied to overcome this limitation. Here we study the preparation of many-body ground states in Jaynes-Cummings lattices using QOC with the Chopped-Random Basis (CRAB) algorithm. Our result shows that high-fidelity state preparation can be achieved using the CRAB algorithm in a significantly shorter time than that of the adiabatic algorithm. We also find the minimal evolution time for achieving high fidelity under a given set of optimization constraints. This study can lead to the development of fast and efficient quantum algorithms for the preparation of many-body states.

59. Quantum optimization heuristics with an application to Knapsack Problems

Natalie Parham, University of Waterloo

Abstract. This paper introduces two techniques that make the standard Quantum Approximate Optimization Algorithm (QAOA) more suitable for constrained optimization problems. The first technique describes how to use the outcome of a prior greedy classical algorithm to define an initial quantum state and mixing operation to adjust the quantum optimization algorithm to explore the possible answers around this initial greedy solution. The second technique is used to nudge the quantum exploration to avoid the local minima around the greedy solutions. To analyze the benefits of these two techniques we run the quantum algorithm on known hard instances of the Knapsack Problem using unit depth quantum circuits. The results show that the adjusted quantum optimization heuristics typically perform better than various classical heuristics.

60. Boolean tensor networks on quantum annealers

Elijah Pelofske, Los Alamos National Laboratory

Abstract. In this research we show that Boolean tensor networks can be computed using quantum annealing (QA). Tensors offer a representation of complex high-dimensional data, which is a natural type of data in modern computing and scientific applications. A Boolean tensor network represents an input binary tensor (i.e. a tensor containing two categories of data, such as True and False) as a product of low-dimensional binary tensors which contain latent features of the high-dimensional tensor. We show that quantum annealers can be used to implement three types of Boolean tensor network algorithms: Tensor Train, Tucker, and Hierarchical Tucker. This is accomplished by reducing tensor factorization to a sequence of Boolean matrix factorization problems, which can be expressed as quadratic unconstrained binary optimization (QUBO) problems that can be solved using QA. Utilizing this technique allows factorization of arbitrarily large tensors; the only constraint is on the rank of the factorization. Additionally, we demonstrate a novel method called parallel quantum annealing, which solves multiple sub-problems at the same time on the QA hardware. Python Quantum Boolean Tensor Networks (pyQBTNs) is a user-friendly software package that makes these methods available for public use. We show that tensors containing up to millions of elements can be efficiently factored using D-Wave quantum annealers.
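The reduction described above bottoms out in Boolean matrix factorization. As a small illustration of the subproblem that would be posed as a QUBO on the annealer, here is a brute-force classical sketch (illustrative only; pyQBTNs' actual API is not assumed):

```python
import numpy as np
from itertools import product

def boolean_product(A, X):
    """Boolean matrix product: (A o X)_ij = OR_k (A_ik AND X_kj)."""
    return (A @ X > 0).astype(int)

def brute_force_bmf(B, rank):
    """Exhaustively search binary factors A (n x r) and X (r x m)
    minimizing the Hamming distance |B - A o X|. On a quantum annealer
    this inner search is instead posed as a QUBO and solved by annealing."""
    n, m = B.shape
    best, best_cost = None, B.size + 1
    for a_bits in product([0, 1], repeat=n * rank):
        A = np.array(a_bits).reshape(n, rank)
        for x_bits in product([0, 1], repeat=rank * m):
            X = np.array(x_bits).reshape(rank, m)
            cost = int(np.sum(B != boolean_product(A, X)))
            if cost < best_cost:
                best, best_cost = (A, X), cost
    return best, best_cost

B = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
(A, X), cost = brute_force_bmf(B, rank=2)
print(cost)  # 0: this B admits an exact Boolean rank-2 factorization
```

The exhaustive search is exponential in the factor sizes, which is exactly why casting each Boolean matrix factorization as a QUBO for annealing hardware is attractive.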

61. Upper Bound for Device Independent Conference Key Agreement

Aby Philip, Louisiana State University

Abstract. Conference key agreement is a genuinely multipartite cryptographic task. Here, we consider the device-independent (DI) approach, which makes as few assumptions about the underlying devices as possible. In this work, we introduce the multipartite intrinsic non-locality as a resource quantifier for the tripartite scenario of device-independent conference key agreement. We prove that this quantity is additive, convex, and monotone under local operations and common randomness. As one of our technical contributions, we establish a chain rule for multipartite mutual information, which may be of independent interest. We then use this chain rule to establish the multipartite intrinsic non-locality as an upper bound on the secret key rate in the general multipartite scenario of DI conference key agreement. We discuss various examples of DI conference key protocols and compare our upper bounds for these protocols with known lower bounds.

62. Using chaotic quantum maps as a test of current quantum computing hardware fidelity*

Max D. Porter, Lawrence Livermore National Laboratory

Abstract. We explore the quantum simulation of chaotic dynamics on near-term quantum computers using the quantum sawtooth map, the simplest quantum system with rich dynamics and a classically chaotic counterpart [G. Benenti, et al. Phys. Rev. Lett. 87, 227901-1 (2001)]. Depending on the parameters of the map, the competition between classical chaos and quantum interference can either produce diffusive chaotic dynamics or dynamical localization. The two regimes can be distinguished in the presence of noise by their fidelity, which decays exponentially in the chaotic case but only algebraically in the localized case. In the chaotic regime the fidelity decays at the same rate as the Lyapunov exponent, independent of noise magnitude, giving an efficient few-qubit signature of quantum chaos. We use the IBM-Q platform to test the ability of present-day quantum hardware to realize these chaotic dynamics. First, we can observe the transition from diffusion to localization when using three qubits, but not more. Next, we demonstrate that with three qubits the fidelity decay distinguishes the dynamics, becoming more rapid as the dynamics become more chaotic in a manner that is independent of gate count. Finally, we show that measuring the Lyapunov exponent as a noise-independent fidelity decay requires at least six qubits with at least an order of magnitude less noise. *Prepared by LLNL under US DOE Contract DE-AC52-07NA27344.
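The sawtooth map itself is inexpensive to simulate classically with a split-step FFT scheme, which makes the diffusive and localized regimes easy to reproduce. A numpy sketch (parameters are illustrative, not those of the paper):

```python
import numpy as np

def sawtooth_step(psi, k, T):
    """One period of the quantum sawtooth map on an N-level momentum grid:
    free rotation exp(-i T m^2 / 2) in the momentum basis, then the kick
    exp(+i k (theta - pi)^2 / 2) applied in the angle basis via FFT."""
    N = len(psi)
    m = np.arange(N) - N // 2
    theta = 2 * np.pi * np.arange(N) / N
    psi = psi * np.exp(-1j * T * m**2 / 2)        # momentum basis
    phi = np.fft.fft(psi)                          # to angle basis
    phi = phi * np.exp(1j * k * (theta - np.pi)**2 / 2)
    return np.fft.ifft(phi)                        # back to momentum

N, steps = 512, 200
T = 2 * np.pi / N                                  # effective Planck constant
spreads = {}
# K = k*T sets the classical dynamics; a small kick strength k makes the
# localization length shorter than the grid, freezing the diffusion
for label, K in [("diffusive", 1.5), ("localized", 0.03)]:
    psi = np.zeros(N, dtype=complex)
    psi[N // 2] = 1.0                              # start at m = 0
    for _ in range(steps):
        psi = sawtooth_step(psi, K / T, T)
    m = np.arange(N) - N // 2
    spreads[label] = float(np.sum(m**2 * np.abs(psi)**2))
print(spreads["diffusive"] > spreads["localized"])  # True
```

The momentum variance grows toward its equilibrated value in the diffusive case but saturates at a small value in the localized case, which is the transition the hardware experiments probe with three qubits.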

63. Preparing Exact Eigenstates for Benchmarking NISQ Computers

Ken Robbins, Tufts University Department of Physics and Astronomy

Abstract. The Variational Quantum Eigensolver (VQE) is a promising algorithm for Noisy Intermediate Scale Quantum (NISQ) computation. Verification and validation of NISQ algorithms' performance on NISQ devices is an important task. We consider the exactly-diagonalizable Lipkin-Meshkov-Glick (LMG) model as a candidate for benchmarking NISQ computers. We use the Bethe ansatz to construct eigenstates of the trigonometric LMG model using quantum circuits inspired by the LMG's underlying algebraic structure. We construct circuits with depth $O(N)$ and $O(\log_2 N)$ that can prepare any trigonometric LMG eigenstate of N particles. The number of gates required for both circuits is $O(N)$. The energies of the eigenstates can then be measured and compared to the exactly-known answers.

64. Single-shot Coherent-State optical phase estimation with adaptive photon counting measurements

Marco Antonio Rodriguez Garcia, Universidad Nacional Autónoma de México

Abstract. Single-shot phase estimation in coherent states has a wide variety of applications for quantum information sciences, including quantum metrology, enhanced quantum sensing, and quantum optics. The optimal measurement for phase estimation corresponds to the canonical phase measurement, which minimizes the error of phase estimators. However, physical implementations of the canonical phase measurement for the optical phase are unknown. Here, we investigate a practical realization of non-Gaussian measurements based on adaptive photon counting and the optimization of coherent displacement operations. We prove that the optimization of coherent displacement operations by a suitable cost function, such as mutual information, allows these non-Gaussian measurements to surpass the standard heterodyne limit and outperform adaptive homodyne measurements. We numerically show that adaptive photon counting measurements maximizing information gain approach the canonical phase measurement in the asymptotic limit with a high convergence rate.

65. Hamiltonian simulation in the low-energy subspace

Burak Sahinoglu, Los Alamos National Laboratory

Abstract. We study the problem of simulating the dynamics of spin systems when the initial state is supported on a subspace of low energy of a Hamiltonian $H$. We analyze error bounds induced by product formulas that approximate the evolution operator and show that these bounds depend on an effective low-energy norm of $H$. We find some improvements over the best previous complexities of product formulas that apply to the general case, and these improvements are more significant for long evolution times that scale with the system size and/or small approximation errors. To obtain our main results, we prove exponentially-decaying upper bounds on the leakage or transitions to high-energy subspaces due to the terms in the product formula that may be of independent interest.

66. Quantum error correction thresholds for the universal Fibonacci Turaev-Viro code

Alexis Schotte, IBM

Abstract. We consider a two-dimensional quantum memory of qubits on a torus which encode the extended Fibonacci string-net code, and devise strategies for error correction when those qubits are subjected to depolarizing noise. Building on the concept of tube algebras, we construct a set of measurements and of quantum gates which map arbitrary qubit errors to the string-net subspace and allow for the characterization of the resulting error syndrome in terms of doubled Fibonacci anyons. Tensor network techniques then allow us to quantitatively study the action of Pauli noise on the string-net subspace. We perform Monte Carlo simulations of error correction in this Fibonacci code, and compare the performance of several decoders. For the case of a fixed-rate sampling depolarizing noise model, we find an error correction threshold of 4.7% using a clustering decoder. To the best of our knowledge, this is the first time that a threshold has been estimated for a two-dimensional error correcting code for which universal quantum computation can be performed within its code space via braiding anyons.

67. Observation of stochastic resonance and unidirectional atomic propagation in optical lattices

Casey Scoggins, Miami University

Abstract. By illuminating a 3D dissipative optical lattice with a weak frequency-scanning probe beam and detecting the probe transmission spectrum we observe a resonant enhancement in the directed propagation of cold atoms as we vary the rate of random photon scattering. The directed propagation is induced by probe intensities less than 1% of the total lattice intensity, and occurs perpendicular to probe propagation along a particular symmetry axis of the lattice in the + / - directions. We experimentally characterize this stochastic resonance as a function of probe intensity and lattice well-depth. We show that by varying the angle of incidence of the probe beam on the lattice we can alter the nature of directed propagation - from bidirectional to unidirectional. A simple one-dimensional model reveals how the probe-modified ground state potentials and optical pumping rates conspire to create directed atomic propagation within a randomly diffusing sample. We discuss the possibility for further analysis of the directed motion in the lattice, in terms of the dwell-time in a particular well versus the time taken to hop between adjacent wells, by direct measurement of polarization-sensitive photon statistics of the scattered light. We gratefully acknowledge funding by the Army Research Office.

68. Multiplexed quantum repeaters based on dual-species trapped-ion systems

Kaushik Seshadreesan, University of Arizona

Abstract. Trapped ions form an advanced technology platform for quantum information processing with long qubit coherence times, high-fidelity quantum logic gates, optically active qubits, and a potential to scale up in size while preserving a high level of connectivity between qubits. These traits make them attractive not only for quantum computing, but also for quantum networking. Dedicated, special-purpose trapped-ion processors in conjunction with suitable interconnecting hardware can be used to form quantum repeaters that enable high-rate quantum communications between distant trapped-ion quantum computers in a network. In this regard, hybrid traps with two distinct species of ions, where one ion species can generate ion-photon entanglement that is useful for optically interfacing with the network and the other has long memory lifetimes, useful for qubit storage, have been proposed for entanglement distribution. We consider an architecture for a repeater based on such dual-species trapped-ion systems. We propose and analyze protocols based on spatial and temporal mode multiplexing for entanglement distribution across a line network of such repeaters. Our protocols offer enhanced rates compared to rates previously reported for such repeaters. We determine the ion resource requirements for attaining the enhanced rates, and the best rates attainable when constraints are placed on the number of ions.

69. Quantifying the performance of bidirectional quantum teleportation

Aliza Siddiqui, Louisiana State University

Abstract. Bidirectional teleportation is a fundamental protocol for exchanging quantum information between two parties by means of a shared resource state and local operations and classical communication (LOCC). In this work, we develop two seemingly different ways of quantifying the simulation error of unideal bidirectional teleportation by means of the normalized diamond distance and the channel infidelity, and we prove that they are equivalent. By relaxing the set of operations allowed from LOCC to those that completely preserve the positivity of the partial transpose, we obtain semi-definite programming lower bounds on the simulation error of unideal bidirectional teleportation. We evaluate these bounds for three key examples: when there is no resource state at all and for isotropic and Werner states, in each case finding an analytical solution. The first aforementioned example establishes a benchmark for classical versus quantum bidirectional teleportation. We then evaluate the performance of some schemes for bidirectional teleportation due to [Kiktenko et al., Phys. Rev. A 93, 062305 (2016)] and find that they are suboptimal and do not go beyond the aforementioned classical limit for bidirectional teleportation. We offer a scheme alternative to theirs that is provably optimal. Finally, we generalize the whole development to the setting of bidirectional controlled teleportation, in which there is an additional assisting party who helps with the exchange of quantum information.

70. Trace-preserving tensor network models of quantum channels

Siddarth Srinivasan, University of Washington

Abstract. Modeling quantum channels is a common component in quantum process tomography and quantum error correction. However, the number of parameters needed to fully characterize a quantum channel scales exponentially with the number of qubits. Tensor network factorizations such as locally purified density operators (LPDOs) can serve as a reasonable ansatz for modeling completely-positive trace-preserving quantum channels while keeping the number of parameters tractable. However, a trace-preserving parameterization of LPDOs is as yet unknown. In this work, we investigate trace-preserving parameterizations of LPDOs for modeling quantum channels, with applications in quantum process tomography and quantum error correction.

71. Decay of quantum conditional mutual information in infinite uniform matrix product states

Pavel Svetlichnyy, Georgia Institute of Technology

Abstract. The quantum conditional mutual information (QCMI), defined by $I(A:C|B)=S(AB)+S(BC)-S(B)-S(ABC)$, where $S(X)$ is the von Neumann entropy of the reduced density operator on $X$, is associated with the problem of state preparation by layers of quantum channels (quantum circuits of finite depth), also known as the quantum Markov property. We investigate the decay of QCMI for the reduced density operators of infinite uniform matrix product states (uMPS). We consider reduced density operators on the domain consisting of three consecutive contiguous regions, $A$, $B$, and $C$. We show that the QCMI is bounded by the function $C\exp(-q|B|+c\ln|B|)$, where $C$, $c$, and $q$ are constants, and $|B|$ is the size of the region $B$, in the asymptotic limit of large $|B|$. Note that for uMPS the QCMI converges to zero in general, unlike quantum correlations and quantum mutual information, for which the uMPS must be assumed to be injective. It is known that for injective uMPS, the decay rate of correlations and quantum mutual information is determined by the spectral gap ($\nu_{\mathrm{gap}}$) of the transfer matrix. We prove that for any uMPS, not necessarily injective, the bound on the asymptotic decay rate of QCMI is $q>\ln{\nu_{\mathrm{gap}}^{-1}}/2$. Our numerical study of example cases suggests that, while the scaling of the decay rate with $\ln{\nu_{\mathrm{gap}}^{-1}}$ is correct, a stricter bound $q>2\ln{\nu_{\mathrm{gap}}^{-1}}$ may hold.
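The QCMI defined above can be evaluated directly for small tripartite states. A minimal numpy sketch, independent of the uMPS machinery (the three-qubit GHZ example is illustrative):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr[rho ln rho] in nats."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]           # drop numerically zero eigenvalues
    return float(-np.sum(ev * np.log(ev)))

def partial_trace(rho, keep, dims):
    """Reduced density matrix on the subsystems listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims * 2)   # (ket axes..., bra axes...)
    rem = n
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=i, axis2=i + rem)
        rem -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def qcmi(rho, dims):
    """I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC) for tripartite rho."""
    return (entropy(partial_trace(rho, [0, 1], dims))
            + entropy(partial_trace(rho, [1, 2], dims))
            - entropy(partial_trace(rho, [1], dims))
            - entropy(rho))

dims = [2, 2, 2]
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho_ghz = np.outer(ghz, ghz)
print(round(qcmi(rho_ghz, dims) / np.log(2), 6))  # 1.0
```

For the GHZ state the QCMI equals ln 2 (one bit), whereas it vanishes for product states; the abstract's result bounds how quickly it decays with the size of the middle region for uMPS.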

72. Digitized Quantum Annealing with oscillating transverse fields in solving optimization problems

Zhijie Tang, Colorado School of Mines

Abstract. In recent years, there has been a great deal of focus on Quantum Annealing (QA) and its low-depth digitized counterpart, the Quantum Approximate Optimization Algorithm, as promising tools for solving hard optimization problems. Recent work on RFQA, a modification to QA where local, low-frequency oscillations are added to transverse field terms, has shown it is capable of generating noise-tolerant polynomial quantum speedups in solving hard optimization problems. Inspired by the performance of RFQA, we explore a digitized version that could be simulated on universal gate model machines. We apply the digitized version of RFQA to various trial problems using classical numerical simulation and show that digitized RFQA is a potentially promising tool for solving hard problems in optimization and machine learning in digital quantum computing. We also explore how the chosen timestep can change the effective tunneling rate at exponentially small gaps, and how this effect interacts with the acceleration from RFQA.

73. Fidelity Lower Bounds for Graph States from a Small, Constant Number of Measurement Settings

Tyler Thurtell, University of New Mexico CQuIC

Abstract. Benchmarking the performance of noisy intermediate scale quantum (NISQ) devices is a major near-term challenge for quantum information science. A particularly important class of states across many areas of quantum information are the stabilizer or graph states. To aid in the evaluation of the quality of preparation of these states, we introduce operators whose expectation values in a prepared state lower bound the fidelity between that state and a target graph state. The operators we introduce are constructed from linear combinations of efficiently measurable stabilizers. On small graphs, we choose the coefficients to maximize the noise tolerance for a chosen noise model. We then extend the small graph operators to larger graphs so that performance is system size independent under weak, local noise. The result is a new family of operators whose expectation values give fidelity lower bounds for graph states which are tighter while requiring only a constant number of measurement settings.

74. Continuous variable quantum repeater networks based on noiseless linear amplification and mode multiplexing

Ian Tillman, University of Arizona

Abstract. Quantum repeaters are a main focus of study in quantum communications, whose aim is to boost the rates of entanglement and secret key distribution over the direct transmission capacity and increase the range of communications. The so-called “two-way” quantum repeaters, when interspersed between two end nodes, work by probabilistically distributing entanglement across the lossy divided channel segments between every pair of adjacent nodes, followed by entanglement distillation and entanglement swapping at the nodes. In continuous variable (CV) entanglement distribution, previous works have shown that noiseless linear amplifiers (NLAs) can be realized using so-called quantum scissors and, when paired with mode multiplexing, can function as repeaters. In this work we study a general quantum channel consisting of two lossy elementary links, each containing several multiplexed two-mode squeezed vacuum (TMSV) CV entanglement sources and quantum scissor NLAs, whose entanglements are swapped to the end users via dual homodyne detection (DHD). Specifically, we calculate the full two-mode Fock basis expansion of the resulting state given completely general (notably asymmetric) TMSV, NLA, and loss parameters. Using this general state description we analyze optimal placement of a central hub node that connects any pair of users in a 4-user square network and find the percentage of placement area for which this approach surpasses the repeaterless bound or direct DHD without the NLA.

75. Exploring the effect of quantum Darwinism

Madhav Tiwari, Queen's University Belfast

Abstract. The measurement problem is arguably one of the most studied, debated, and still controversial problems in quantum foundations. Yet it lies at the very basis of crucial phenomena such as the quantum-to-classical transition. Quantum Darwinism offers a way to shed light on the mechanism underlying this transition by considering the emergence of pointer states from quantum superpositions under the influence of an environment. Here we show the emergence of Quantum Darwinism, i.e., the redundancy of information about the system across fragments of the environment, using a collisional model in which dephasing interactions among the elements of the environment surrounding a quantum system are allowed. We analyze how the resulting phenomenology differs from that of standard Quantum Darwinism, whose paradigm excludes environment-environment interactions. In such conditions, we highlight the emergence of an interesting delay in the formation of the information plateau in the typical redundancy plots used to characterize the occurrence of Darwinism.

76. Suppression of crosstalk in superconducting qubits using dynamical decoupling

Vinay Tripathi, University of Southern California

Abstract. Currently available superconducting quantum processors with interconnected transmon qubits are noisy and prone to various errors. The errors can be attributed to sources such as open quantum system effects and spurious inter-qubit couplings (crosstalk). The ZZ-coupling between qubits in fixed frequency transmon architectures is always present and contributes to both coherent and incoherent crosstalk errors. Its suppression is therefore a key step towards enhancing the fidelity of quantum computation using transmons. Here we propose the use of dynamical decoupling to suppress the crosstalk, and demonstrate the success of this scheme through experiments performed on several IBM quantum cloud processors. We perform open quantum system simulations of the multi-qubit processors and find good agreement with all the experimental results. We analyze the performance of the protocol based on a simple analytical model and elucidate the importance of the qubit drive frequency in interpreting the results. In particular, we demonstrate that the XY4 dynamical decoupling sequence loses its universality if the drive frequency is not much larger than the system-bath coupling strength. Our work demonstrates that dynamical decoupling is an effective and practical way to suppress crosstalk and open system effects, thus paving the way towards high-fidelity logic gates in transmon-based quantum computers.
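As a minimal toy illustration (not the multi-qubit open-system simulations described above), the refocusing mechanism behind dynamical decoupling can be seen in a single qubit whose crosstalk-induced frequency shift is modeled as a static $\delta\sigma_z$ term, with ideal instantaneous pulses standing in for the XY4 sequence; the parameter values below are illustrative choices, not taken from the experiments:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def free_evolution(delta, tau):
    """Evolution under H = delta * Z for time tau (hbar = 1)."""
    return np.diag(np.exp(-1j * delta * tau * np.diag(Z)))

def xy4_cycle(delta, tau):
    """One XY4 cycle with ideal, instantaneous pi pulses (up to phase):
    free evolution interleaved with X, Y, X, Y."""
    V = free_evolution(delta, tau)
    return Y @ V @ X @ V @ Y @ V @ X @ V

delta, tau = np.pi / 8, 1.0            # static shift chosen so free dephasing is maximal
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

U_free = np.linalg.matrix_power(free_evolution(delta, tau), 4)  # same total time, no pulses
U_dd = xy4_cycle(delta, tau)

fid_free = abs(plus.conj() @ U_free @ plus) ** 2
fid_dd = abs(plus.conj() @ U_dd @ plus) ** 2
print(fid_free, fid_dd)   # free evolution dephases |+>; XY4 refocuses it exactly
```

For static noise the cycle composes to the identity up to a global phase, so the $|+\rangle$ coherence is fully recovered; time-dependent noise and finite pulse widths, as treated in the abstract, degrade this ideal picture.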

77. Generating nonlinearities from conditional linear operations, squeezing, and measurement for quantum computation and super-Heisenberg sensing

Jason Twamley, Okinawa Institute for Science and Technology Graduate University

Abstract. Large bosonic or continuous variable nonlinearities can have numerous applications, ranging from the generation of cat states for quantum computation, through to quantum sensing where the sensitivity exceeds Heisenberg scaling in the resources. However, the generation of ultra-large nonlinearities has proved immensely challenging experimentally. We describe a novel protocol where one can effectively generate large Kerr-type nonlinearities via the conditional application of a linear operation on an optical mode by an ancilla mode, followed by a measurement of the ancilla and corrective operation on the probe mode. Our protocol can generate high-quality Schrödinger cat states useful for quantum computing and can be used to perform sensing of an unknown rotation or displacement in phase space, with super-Heisenberg scaling in the resources. We finally describe a potential experimental implementation using atomic ensembles interacting with optical modes via the Faraday effect.

78. Universal Compiling and (No-)Free-Lunch Theorems for Continuous Variable Quantum Learning

Tyler Volkoff, Los Alamos National Laboratory

Abstract. Variational quantum compiling, where a parameterized quantum circuit V(θ) is trained to learn a target unitary U, is a fundamental primitive for quantum computing. It can be used to find optimal circuits to aid the implementation of larger algorithms. In this presentation, we will introduce algorithms for continuous-variable (CV) variational quantum compiling which are motivated by extending the “no-free-lunch” (NFL) theorems of supervised learning theory to the quantum CV setting. These algorithms utilize readily available Gaussian resources, such as coherent states and two-mode squeezed states. In addition to proving quantum NFL theorems for learning linear optical unitaries and general CV unitaries, we will further prove that the corresponding CV compiling algorithms are trainable, and thus do not exhibit obstructions to scalability such as the barren plateau phenomenon that plagues the finite dimensional case. The results show how theorems from statistical learning theory can be used to motivate near-term CV quantum compiling algorithms. We illustrate the wide applicability of our cost functions for CV quantum compiling by numerically demonstrating efficient learning of arbitrary single-mode Gaussian unitaries, two-mode beamsplitters, and Kerr non-linearities. We expect our algorithms to find applications in a broad range of areas including the characterization of nonlinear optical media, entanglement spectroscopy, and optimal CV circuit design.

79. Quantum Fisher information bounds on precision limits of circular dichroism

Jiaxuan Wang, Texas A&M University

Abstract. Circular dichroism (CD) is a widely used technique for investigating optically chiral molecules, especially biomolecules. It is thus of great importance that the chirality parameters be estimated precisely so that molecules with desired functionalities can be designed. In order to surpass the limits of classical measurements, we need to probe the system with quantum light. We develop the quantum Fisher information matrix (QFIM) for precision estimates of circular dichroism and optical rotary dispersion for a variety of input quantum states of light that are easily accessible in the laboratory. The Cramér-Rao bounds for all four chirality parameters are obtained from the QFIM for (a) single-photon input states with a specific linear polarization and (b) NOON states having two photons that are both either left or right polarized. The QFIM bounds using quantum light are compared with bounds obtained for classical light beams, i.e., beams in coherent states. Quite generally, both the single-photon state and the NOON state exhibit superior precision in the estimation of absorption and phase shift relative to a coherent source of comparable intensity, especially in the weak-absorption regime. In particular, the NOON state naturally offers the best precision among the three. We compare the QFIM bounds with error-sensitivity bounds, as the latter are relatively easier to measure whereas the QFIM bounds require full state tomography.
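For context, the precision bounds referred to above are instances of the standard multiparameter quantum Cramér-Rao bound: for $N$ independent probes, the covariance of any unbiased estimator of the parameter vector $\boldsymbol{\theta}$ is bounded by the inverse QFIM, whose entries are built from the symmetric logarithmic derivatives $L_i$,

```latex
\mathrm{Cov}(\hat{\boldsymbol{\theta}}) \succeq \frac{1}{N}\,\mathcal{F}_Q(\boldsymbol{\theta})^{-1},
\qquad
[\mathcal{F}_Q]_{ij} = \operatorname{Re}\operatorname{Tr}\!\left[\rho\, L_i L_j\right],
\qquad
\partial_{\theta_i}\rho = \tfrac{1}{2}\left(L_i \rho + \rho L_i\right).
```

This is the generic bound; the abstract's contribution is evaluating it for the four chirality parameters with specific probe states.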

80. Topological stability of stored Bessel beams in rubidium vapor via electromagnetically induced transparency

Scott Wenner, Miami University

Abstract. We report on the robustness of different transverse electric-field profiles of light stored in warm dilute alkali vapor via electromagnetically induced transparency (EIT). First, we store Gaussian and Laguerre-Gaussian (LG) modes of light and measure the evolution of the intensity profile. We find that though the LG beam’s topological feature stays intact, the intensity profiles for both modes enlarge owing to degrading effects from diffusion as the beams propagate through the medium. Next, we compare the robustness of these beam profiles to that of a Bessel beam generated with an axicon, and demonstrate the Bessel profile to be resistant to degradation from diffusion. Finally, we generate an “imposter Bessel” profile by passing a Gaussian beam through a two-ring mask, so that its intensity profile resembles that of a Bessel beam. We find that this imposter quickly diffuses into a Gaussian profile as the beam starts to propagate through the EIT medium. This work demonstrates a pathway to enhancing atom-based information storage by controlling the electric-field profile of the beam. We gratefully acknowledge funding by the Army Research Office.

81. Single mode squeezers for enhancement of transmission estimation with macroscopic quantum states

Timothy Woodworth, University of Oklahoma

Abstract. Transmission estimation is at the heart of a number of techniques such as spectroscopy, calibration of optical elements and efficiency of detectors; can be used to set bounds on quantum interferometry and quantum key distribution; and serves as the estimation parameter for sensors such as resonance or plasmonic sensors. We have previously shown that displaced squeezed states, such as the bright two mode squeezed state (bTMSS) and bright single mode squeezed state (bSMSS), can approach the ultimate sensitivity per photon for transmission estimation given by the quantum Fisher information (QFI), while operating with a number of photons many orders of magnitude larger than the optimal states. Here, we study the effect on the QFI of additional squeezers before and after the system under study. We compare the addition of a single mode squeezer in the path of the optical beam probing the system when using a bSMSS or a bTMSS. As expected, we find that with the additional squeezer the bSMSS has a higher QFI than the bTMSS. However, when classical noise is introduced, we find that the bTMSS is more robust than the bSMSS and can therefore achieve a higher QFI. Additionally, as has been previously shown, the negative effects of imperfect detection can be greatly reduced through the use of a single mode squeezer to anti-squeeze the amplitude quadrature after the probed system.

82. Counterdiabaticity and the quantum approximate optimization algorithm

Jonathan Wurtz, Tufts University

Abstract. The quantum approximate optimization algorithm (QAOA) is a near-term hybrid algorithm intended to solve combinatorial optimization problems, such as MaxCut. QAOA can be made to mimic an adiabatic schedule, and in the $p\to\infty$ limit the final state is an exact maximal eigenstate in accordance with the adiabatic theorem. In this presentation I will make the connection between QAOA and adiabaticity explicit by inspecting the regime of $p$ large but finite. By connecting QAOA to counterdiabatic (CD) evolution, we construct CD-QAOA angles which mimic a counterdiabatic schedule by matching Trotter ``error" terms to approximate adiabatic gauge potentials which suppress diabatic excitations arising from finite ramp speed. In our construction, these ``error" terms are helpful, not detrimental, to QAOA, and QAOA is found to be always at least counterdiabatic, not just adiabatic. While applied specifically to QAOA, the talk will also discuss applications of counterdiabatic ideas to construct better ansätze for more general variational algorithms, such as the VQE for quantum chemistry.
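For readers unfamiliar with the baseline algorithm, a standard $p=1$ QAOA layer (without the counterdiabatic construction discussed here) can be sketched on the trivial single-edge MaxCut instance, where a coarse grid over the two angles already recovers the optimal cut:

```python
import numpy as np

# MaxCut on a single edge: cost C = (1 - Z1 Z2) / 2, with maximum cut value 1.
Z = np.array([1, -1])
c_diag = (1 - np.kron(Z, Z)) / 2          # diagonal of C in the computational basis

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def qaoa_expectation(gamma, beta):
    """<C> after one QAOA layer (cost layer, then mixer) applied to |++>."""
    psi = np.full(4, 0.5, dtype=complex)               # |++> initial state
    psi = np.exp(-1j * gamma * c_diag) * psi           # cost layer (diagonal phase)
    rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X     # exp(-i beta X)
    psi = np.kron(rx, rx) @ psi                        # mixer on both qubits
    return float(np.real(np.conj(psi) @ (c_diag * psi)))

# Crude grid search over the two p = 1 angles.
angles = np.arange(0, np.pi + 1e-9, np.pi / 16)
best = max(qaoa_expectation(g, b) for g in angles for b in angles)
print(best)   # approaches the optimal cut value 1 for this trivial instance
```

The CD-QAOA construction in the abstract modifies how such angles are chosen at large but finite $p$; this sketch only fixes the baseline ansatz being modified.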

83. Triangle measure of tripartite entanglement

Songbo Xie, University of Rochester

Abstract. Although multiparty entanglement has been demonstrated for large numbers of qubits, finding a genuine entanglement measure even for three qubits has proven difficult, despite the fact that two-qubit entanglement measures are well studied. One two-qubit entanglement measure is the concurrence C, satisfying 1 ≥ C ≥ 0: the value 0 implies no entanglement, while the value 1 corresponds to maximal entanglement of the qubits, as in a Bell state. A three-qubit entanglement measure, however, must satisfy the following two conditions to be called a genuine multipartite entanglement (GME) measure: (a) GME = 0 for biseparable and product states, and (b) GME > 0 for nonbiseparable states. Most existing tripartite measures fail to meet these conditions. We will present the first clear example of a three-qubit GME measure that properly captures relative state information. The concurrence relation developed by Qian, Alonso, and Eberly for an arbitrary three-qubit pure state implies a geometric triangle whose edge lengths are the three one-to-other concurrences in squared form; we call this the concurrence triangle. We find that the area of the concurrence triangle satisfies both conditions (a) and (b), and is thus a proper GME measure, which we name the concurrence Fill. Specific examples will be illustrated to show that different tripartite GME measures are inequivalent and that the triangle-based concurrence Fill is superior to the other GMEs in containing more information about the system.
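The area computation the abstract describes can be sketched with Heron's formula applied to the squared one-to-other concurrences; the $16/3$ prefactor is the normalization of the concurrence-Fill construction, chosen so the GHZ state attains the maximal value 1:

```python
import math

def concurrence_fill(c1, c2, c3):
    """Area-based GME measure from the concurrence triangle.

    c1, c2, c3 are the one-to-other concurrences C_{1(23)}, C_{2(13)}, C_{3(12)};
    their squares form the edge lengths of the concurrence triangle.
    The prefactor 16/3 normalizes the measure to 1 for the GHZ state.
    """
    a, b, c = c1**2, c2**2, c3**2
    q = 0.5 * (a + b + c)                     # half-perimeter (Heron's formula)
    area_sq = q * (q - a) * (q - b) * (q - c)
    if area_sq < 0:                           # inputs must satisfy the triangle inequality
        raise ValueError("squared concurrences do not form a triangle")
    return math.sqrt((16.0 / 3.0) * area_sq)

print(concurrence_fill(1, 1, 1))   # GHZ state: maximal genuine entanglement
print(concurrence_fill(1, 1, 0))   # biseparable case: degenerate triangle, zero Fill
```

A degenerate (zero-area) triangle corresponds to biseparable states, satisfying condition (a), while any nonzero area signals genuine tripartite entanglement, satisfying condition (b).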

84. Spectral Analysis of Product Formulas in Quantum Simulation

Changhao Yi, University of New Mexico CQuIC

Abstract. We consider Hamiltonian simulation using the first order Lie-Trotter product formula under the assumption that the initial state has a high overlap with an energy eigenstate, or a collection of eigenstates in a narrow energy band. This assumption is motivated by quantum phase estimation (QPE) and digital adiabatic simulation (DAS). Treating the effective Hamiltonian that generates the Trotterized time evolution using rigorous perturbative methods, we show that the Trotter step size needed to estimate an energy eigenvalue within precision $\epsilon$ using QPE can be improved in scaling from $\epsilon$ to $\epsilon^{1/2}$ for a large class of systems. For DAS we improve the asymptotic scaling of the Trotter error with the total number of gates $M$ from $\mathcal{O}(M^{-1})$ to $\mathcal{O}(M^{-2})$, and for any fixed circuit depth we calculate an approximately optimal step size that balances the error contributions from Trotterization and the adiabatic approximation. These results partially generalize to diabatic processes, which remain in a narrow energy band separated from the rest of the spectrum by a gap, thereby contributing to the explanation of the observed similarities between the quantum approximate optimization algorithm and diabatic quantum annealing at small system sizes.
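The first-order Lie-Trotter product formula analyzed above can be illustrated on a toy two-term Hamiltonian; this generic sketch (not the perturbative eigenstate analysis of the abstract) just shows the operator-norm error shrinking with the step size:

```python
import numpy as np

# Toy Hamiltonian H = A + B with non-commuting terms (Pauli X and Z).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_exp(P, theta):
    """exp(-i theta P) for a Pauli matrix P (uses P^2 = I)."""
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * P

def exact_evolution(t):
    """exp(-i (X + Z) t) via eigendecomposition."""
    evals, evecs = np.linalg.eigh(X + Z)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

def trotter_evolution(t, n_steps):
    """First-order Lie-Trotter product: (exp(-i X dt) exp(-i Z dt))^n."""
    dt = t / n_steps
    step = pauli_exp(X, dt) @ pauli_exp(Z, dt)
    return np.linalg.matrix_power(step, n_steps)

t = 1.0
err = lambda n: np.linalg.norm(trotter_evolution(t, n) - exact_evolution(t), ord=2)
print(err(10), err(100))   # first-order error shrinks roughly linearly in dt
```

The abstract's point is that for eigenstate-supported initial states the *eigenvalue* error behaves better than this worst-case operator-norm scaling suggests, allowing larger Trotter steps in QPE and DAS.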

85. Tomographic construction and prediction of superconducting qubit dynamics using the post-Markovian master equation

Haimeng Zhang, University of Southern California

Abstract. Non-Markovian noise presents a particularly relevant challenge in understanding and combating decoherence in quantum computers. Using tomographically constructed state dynamics of a superconducting qubit system, we show that we can construct a phenomenological dynamical model using the post-Markovian master equation (PMME). We experimentally test our protocol by characterizing the free evolution of a single qubit in one of IBMQ's cloud-based quantum processors. The resultant PMME model characterizes the crosstalk effect due to the neighboring qubits and the timescales of the qubit decoherence and dissipation processes, and quantifies the degree of non-Markovianity of the system. We also demonstrate that the constructed PMME model can predict future qubit dynamics for an arbitrary single-qubit state better than the standard Lindblad model. Our model-construction protocol requires sampling the qubit evolution at multiple time points for only one initial qubit state; thus, it requires less data than process tomography and machine-learning methods. The PMME has a closed-form analytical solution, making it straightforward to find the best-fit PMME model parameters via maximum likelihood estimation. Our protocol provides a robust estimation method for a continuous dynamical model beyond the commonly assumed Markovian approximation, leading to more accurate modeling of noisy intermediate-scale quantum (NISQ) devices.
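For reference, the post-Markovian master equation of Shabani and Lidar, on which this protocol is based, interpolates between exact and Markovian dynamics via a phenomenological memory kernel $k(t)$ acting alongside the Markovian generator $\mathcal{L}$:

```latex
\dot{\rho}(t) = \mathcal{L} \int_0^t dt'\, k(t')\, e^{\mathcal{L} t'}\, \rho(t - t').
```

A memoryless kernel $k(t') = \delta(t')$ recovers Lindblad dynamics, while structured kernels capture the non-Markovian features that the fitted model parameters quantify.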

86. Fermionic partial tomography via classical shadows

Andrew Zhao, University of New Mexico CQuIC

Abstract. We propose a tomographic protocol for estimating any k-body reduced density matrix (k-RDM) of a fermionic state, a ubiquitous step in near-term quantum algorithms for simulating many-body physics, chemistry, and materials. Our approach extends the framework of classical shadows, a randomized approach to learning a collection of quantum-state properties, to the fermionic setting. Our sampling protocol employs randomized measurements generated by a discrete group of fermionic Gaussian unitaries, implementable with linear-depth circuits, to achieve near-optimal scaling in the number of repeated state preparations required of fermionic RDM tomography. We also numerically demonstrate that our protocol offers a substantial improvement in constant overheads over prior state-of-the-art for estimating 2-, 3-, and 4-RDMs.

87. Noise-resistant Landau-Zener sweeps from geometrical curves

Fei Zhuang, Virginia Tech

Abstract. Landau-Zener theory is often exploited to generate quantum logic gates and to perform state initialization and readout. The quality of these operations can be degraded by noise fluctuations in the energy gap at the avoided crossing. We leverage a recently discovered correspondence between qubit evolution and space curves in three dimensions to design noise-robust Landau-Zener sweeps through an avoided crossing. In the case where the avoided crossing is purely noise-induced, we prove that operations based on monotonic sweeps cannot be robust to noise. Hence, we design families of phase gates based on non-monotonic drives that are error-robust up to second order. In the general case where there is an avoided crossing even in the absence of noise, we present a general technique for designing robust driving protocols that takes advantage of a relationship between the Landau-Zener problem and space curves of constant torsion.
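For orientation, in one common convention the underlying Landau-Zener problem takes

```latex
H(t) = \frac{v t}{2}\,\sigma_z + \Delta\,\sigma_x,
\qquad
P_{\mathrm{diabatic}} = \exp\!\left(-\frac{2\pi \Delta^2}{\hbar v}\right),
```

with avoided-crossing gap $2\Delta$ and sweep rate $v$. The abstract's space-curve construction addresses the noisy version of this problem, where fluctuations in $\Delta$ degrade the sweep, and monotonic drives $v t$ are replaced by engineered non-monotonic ones.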