2020 Talk Abstracts

Quantum illumination, illuminated

Presenting Author: Rafael Alexander, University of New Mexico CQuIC
Contributing Author(s): Carlton M. Caves

Abstract: Technology based on single-mode squeezed light is now being used to enhance the detection of gravitational waves in the LIGO/VIRGO interferometers. A key recent idea in quantum metrology is quantum illumination [S. Lloyd, “Enhanced sensitivity of photodetection via quantum illumination,” Science 321, 1463 (2008)], in which two-mode squeezed (quantum-entangled) light is used to detect the presence of a target with fundamentally better precision than any strategy using unentangled light. Though this fundamental improvement was shown theoretically over a decade ago, its translation into a practical real-world scenario remains an unsolved challenge, owing to a combination of the fundamental difficulty of performing a first-principles analysis and the technical limitations set by the requirements of existing technology. We address the first of these issues and shed light on the second by investigating the role of interferometric symmetries. By framing the problem in terms of the natural SU(1,1) symmetry of two-mode squeezed light, we identify previously unstudied “symmetry sectors” for this problem. Within each sector, the output state is entangled and can be modelled as a single (nonlocal) qubit; the entanglement (coherence) within these qubit sectors is responsible for the detection enhancement in quantum illumination.


State tomography with photon counting after a beam splitter

Presenting Author: Arik Avagyan, National Institute of Standards and Technology, Boulder
Contributing Author(s): Hilma Vasconcelos, Scott Glancy, Emanuel Knill

Quantum optics offers several proposed routes to a scalable quantum computer. In order to characterize such a computer one needs to be able to perform state tomography on quantum states of light. A popular tomographic procedure, called homodyne detection, uses a strong coherent state, called the local oscillator (LO), which interferes on a beam splitter with the unknown state. The output beams are measured by photodiodes whose signals are subtracted and normalized. By changing the LO phase, it is possible to infer the optical state in the mode matching the LO. In this work we determine what can also be learned about the contents of the modes not matching the LO by counting photons in one or both outgoing paths after the beam splitter, keeping the LO mode fixed but varying its phase and amplitude. We prove that given the probabilities of photon counts of just one of the counters as a function of LO amplitude, it is possible to determine the content of the unknown optical state in the mode matching the LO mode, conditional on each number of photons in orthogonal modes on the same path. If the unknown optical state has at most n photons, we determine finite sets of LO amplitudes sufficient for inferring the state. Such a setup thus allows a more extensive characterization of the quantum state of light than standard homodyne tomography.
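As a minimal illustration of the balanced-homodyne signal described above (not part of the talk), the following sketch treats both the unknown state and the LO as coherent states interfering on an ideal 50:50 beam splitter; the amplitudes are illustrative assumptions. The mean photodiode-difference signal then picks out the quadrature of the unknown state selected by the LO phase.

```python
import numpy as np

def homodyne_difference_signal(beta, alpha_lo, theta):
    """Mean photodiode-difference signal in balanced homodyne detection.

    A coherent state |beta> interferes with a local oscillator
    |alpha_lo * e^{i theta}> on a 50:50 beam splitter. Both outputs are
    again coherent, and the mean photon-number difference is
    <n_c> - <n_d> = 2 Re(conj(alpha_lo * e^{i theta}) * beta).
    """
    lo = alpha_lo * np.exp(1j * theta)
    c = (beta + lo) / np.sqrt(2)     # output port 1
    d = (beta - lo) / np.sqrt(2)     # output port 2
    return abs(c) ** 2 - abs(d) ** 2  # mean count difference

# Sweeping the LO phase traces out the quadratures of the unknown state.
beta = 0.5 + 0.3j                     # illustrative unknown amplitude
signal_0 = homodyne_difference_signal(beta, alpha_lo=100.0, theta=0.0)
signal_90 = homodyne_difference_signal(beta, alpha_lo=100.0, theta=np.pi / 2)
# Normalizing by 2*alpha_lo recovers Re(beta) and Im(beta) respectively.
print(signal_0 / 200.0, signal_90 / 200.0)
```

Photon counting at the outputs, as in the talk, records the full count distributions rather than only this mean difference, which is what gives access to the non-LO modes.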


Exploring electric-field noise mechanisms through treatments of an ion trap surface

Presenting Author: Maya Berlin-Udi, University of California Berkeley
Contributing Author(s): Clemens Matthiesen, Alberto Alonso, Crystal Noel, Peter T. Lloyd, Vincenzo Lordi, Hartmut Häffner

Electric-field noise is a major limiting factor in the performance and scalability of ion traps. Despite intensive research over the past decade, the microscopic mechanism underlying electric-field noise near surfaces is unknown. We use a single trapped ion as a detector to measure noise at megahertz frequencies, and we find that our measurements are consistent with noise produced by an ensemble of thermally activated fluctuators. We alter the surface with treatments such as prolonged heating and argon ion bombardment, and monitor changes in surface composition and electric field noise in response to these treatments. With these experiments, we establish a concrete link between electric-field noise characteristics and microscopic properties of the ion trap surface.


Quantum supremacy using a programmable superconducting processor

Presenting Author: Sergio Boixo, Google

In this talk I will cover the theoretical aspects of the quantum supremacy experiment at Google, such as: complexity theory foundation, cross entropy benchmarking, statistical analysis and classical simulation algorithms.


Advances in surface code quantum computation

Presenting Author: Benjamin Brown, University of Sydney

The surface code has emerged as one of the leading candidate quantum error-correcting codes to maintain the logical qubits of the first generation of scalable quantum computers. In this talk I will summarise my recent results that may alleviate some of the obstacles to the development of a quantum computer based on a surface-code architecture. I will first discuss work showing how to complete a universal set of fault-tolerant logical gates with the surface code without the need for distillation methods. This includes work showing how we can braid Majorana modes lying at the corners of the planar code to realise Clifford operations. I will also explain how the surface code can be connected to its three-dimensional generalisation to realise a non-Clifford gate using a two-dimensional system. Both of these proposals offer new routes to reduce the resource cost of scalable quantum computation, as they circumvent the need for conventional distillation methods to complete a universal set of logic gates. Finally, I will briefly mention new results where we show that we can increase the threshold of a tailored variant of the surface code by specialising the decoder to deal with the common situation where the noise each qubit experiences is biased towards dephasing. This development means that the surface code can tolerate a higher rate of noise, making it more readily realised in the laboratory with modern hardware.


Coherent state phase estimation based on adaptive photon counting

Presenting Author: Matthew DiMario, University of New Mexico CQuIC
Contributing Author(s): Francisco Elohim Becerra

Optical interferometric measurements are an essential tool in many areas of physics, employing a single mode of light to extract information about the properties of a physical system. Coherent states of light have proven very convenient in such measurements, with the information of interest mapped into the phase of these states. The difficulty, however, is extracting this information with minimal uncertainty, especially in a single-shot measurement. The Cramér-Rao lower bound (CRLB) is the fundamental limit for this uncertainty, which bounds the estimator variance through the quantum Fisher information for coherent states. A physically realizable single-shot measurement strategy that reaches this limit of precision, or even outperforms an ideal heterodyne detection given by twice the CRLB, has yet to be experimentally demonstrated. To this end, we propose and implement a single-shot measurement for phase estimation of coherent states based on coherent displacement operations, single-photon counting, and fast feedback. Our demonstration surpasses the limit of an ideal heterodyne measurement without correcting for detection efficiency in our implementation. This superior performance is achieved by real-time optimization of the displacement operation conditioned on the detection history as the measurement progresses. For this optimization, we show that the use of different objective functions yields similar results, which outperform an ideal heterodyne measurement.


Peering into the anneal process of a quantum annealer

Presenting Author: Hristo Djidjev, Los Alamos National Laboratory
Contributing Author(s): Elijah Pelofske, Georg Hahn

To solve an optimization problem using a commercial quantum annealer, one has to represent the problem of interest as an Ising or QUBO (quadratic unconstrained binary optimization) problem and submit its coefficients to the annealer, which then returns a user-specified number of low-energy solutions. It would be useful to know what happens in the quantum processor during the anneal process so that one could design better algorithms or suggest improvements to the hardware. However, existing quantum annealers are not able to directly extract such information from the processor. Hence, in this work we propose to use advanced features of the newest annealer generation, the D-Wave 2000Q (DW2KQ), to indirectly infer information about the anneal process evolution. Specifically, DW2KQ allows users to customize the anneal schedule, that is, the schedule with which the anneal fraction is changed from the start to the end of the anneal. Using this feature, we design a set of modified anneal schedules whose outputs can be used to generate information about the states of the system at equally spaced time points during a standard anneal. With this process, called slicing, we obtain approximate distributions of lowest-energy anneal solutions as the anneal time evolves. We use our technique to obtain a variety of insights into DW2KQ, such as when individual bits in an evolving solution flip during the anneal process and when they stabilize, and the freeze-out points for individual qubits.
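For readers unfamiliar with the QUBO form mentioned above, a brief illustrative sketch (not the authors' code; the toy matrix is an assumption): a QUBO assigns each bitstring x the energy x^T Q x, and the annealer's job is to return samples concentrated on the lowest-energy bitstrings. For a handful of variables this can be checked classically by brute force.

```python
import itertools
import numpy as np

def qubo_energy(Q, x):
    """Energy of bitstring x under QUBO matrix Q: E(x) = x^T Q x."""
    x = np.asarray(x)
    return float(x @ Q @ x)

def lowest_energy_solutions(Q, num_solutions=3):
    """Exhaustively rank all bitstrings by QUBO energy (small n only).

    This is the classical reference for what an annealer's returned
    low-energy sample set should look like on a toy problem.
    """
    n = Q.shape[0]
    ranked = sorted(itertools.product([0, 1], repeat=n),
                    key=lambda x: qubo_energy(Q, x))
    return [(x, qubo_energy(Q, x)) for x in ranked[:num_solutions]]

# Toy problem: minimize x0 + x1 - 2*x0*x1, which favors x0 == x1.
Q = np.array([[1.0, -2.0],
              [0.0,  1.0]])
print(lowest_energy_solutions(Q))  # both agreeing assignments have energy 0
```

On real hardware the same coefficients would be submitted to the annealer instead, and the slicing technique described above probes how the returned distribution develops during the anneal.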

Read this article online: https://arxiv.org/abs/1908.02691


Full reconstruction of all correlated errors in large-scale quantum computers

Presenting Author: Joseph Emerson, University of Waterloo, Quantum Benchmark Inc.
Contributing Author(s): Joel Wallman, Dar Dahlen, Ian Hincks

In this talk I will describe how the cycle benchmarking protocol enables teams to identify all errors and error correlations for any gate-combination of interest. I will provide experimental data from multi-qubit superconducting qubit and ion trap quantum computers, revealing that: (1) in leading platforms, cross-talk and other error correlations can be much more severe than expected, even many orders of magnitude larger than expected based on independent error models; (2) these cross-talk errors can induce errors on other qubits (e.g., idling qubits) that are an order of magnitude larger than the errors on the qubits in the domain of the gate operation; and thus (3) elementary gate error assessments such as standard and interleaved randomized benchmarking (RB) and gate-set tomography (GST) can give a very misleading picture of the actual "in vivo" errors limiting the performance of a large-scale circuit. I will then discuss how the aggregate error rates measured under cycle benchmarking can be applied to provide a bound on the accuracy of families of circuits implemented via randomized compiling. This will be an important tool for benchmarking algorithms in the "quantum discovery regime", viz., the regime of large-scale quantum computations that lie beyond the horizon of classical digital computation, where cross-entropy benchmarking and quantum volume can no longer be measured.


Charge state instabilities in shallow NV centers for quantum sensing

Presenting Author: Mattias Fitzpatrick, Princeton University
Contributing Author(s): Zhiyang Yuan, Nathalie P. de Leon

Nitrogen Vacancy (NV) centers in diamond are a promising platform for nanoscale sensing, quantum information processing, and quantum networks. For most sensing applications, due to the decay of the target signal outside the diamond, it is advantageous to have NV centers as close as possible to the diamond surface. However, it has been observed that for shallow NV centers, the measurement contrast for Rabi experiments and optically detected magnetic resonance (ODMR) is worse than that for bulk NV centers. Here we demonstrate that the degradation of shallow NV centers' Rabi and ODMR contrasts can be associated with dynamics between the two charge states of the NV center (NV$^0$ and NV$^-$). We validate this claim by comparing two distinctly different diamond samples with shallow NVs, one with charge-state-stable NVs and the other with measurably less stable NVs. We measure NV spectra to compare the equilibrium charge state population for NVs in these two samples. Charge state conversion measurements are performed to extract the ionization and recombination rates in the dark and under both green (532 nm) and orange (590 nm) laser illumination. Finally, to understand how the charge state population and dynamics influence the ESR contrasts, we use time-resolved measurements of NV fluorescence and develop a model of the spin states of NV$^-$ and the NV$^0$ charge state. By fitting the measured fluorescence as a function of time to our model, we deduce that the primary cause of lower ESR and Rabi contrast is an increase in the NV$^0$ population and rapid spin-nonconserving charge state conversion at higher laser powers. This research was supported by an appointment to the Intelligence Community Postdoctoral Research Fellowship Program at Princeton University, administered by the Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence (ODNI).


Efficient learning of quantum noise

Presenting Author: Steven Flammia, University of Sydney

Noise is the central obstacle to building large-scale quantum computers. Quantum systems with sufficiently uncorrelated and weak noise could be used to solve computational problems that are intractable with current digital computers. There has been substantial progress towards engineering such systems. However, continued progress depends on the ability to characterize quantum noise reliably and efficiently with high precision. Here we introduce a protocol that completely and efficiently characterizes the error rates of quantum noise, and we experimentally implement it on a 14-qubit superconducting quantum architecture. The method returns an estimate of the effective noise with relative precision and detects all correlated errors. We show how to construct a quantum noise correlation matrix allowing the easy visualization of all pairwise correlated errors, enabling the discovery of long-range two-qubit correlations in the 14-qubit device that had not previously been detected. These properties of the protocol make it exceptionally well suited for high-precision noise metrology in quantum information processors. Our results are the first implementation of a provably rigorous, full diagnostic protocol capable of being run on state-of-the-art devices and beyond. These results pave the way for noise metrology in next-generation quantum devices, calibration in the presence of crosstalk, bespoke quantum error-correcting codes, and customized fault-tolerance protocols that can greatly reduce the overhead in a quantum computation.


Matrix product state simulations on a quantum computer

Presenting Author: Michael Foss-Feig, Honeywell
Contributing Author(s): Andrew Potter, David Hayes

Matrix product states (MPS) afford a compressed representation of many states that are relevant to physical systems. While numerous classical algorithms have been developed to compute the properties of physical systems using MPS as an ansatz, in many cases of practical interest these algorithms still require exponential resources (for example in the size of the system, or in the evolution time when out of equilibrium). We discuss near-term prospects for using small and non-error-corrected quantum computers to aid in MPS simulations.
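To make the MPS ansatz mentioned above concrete, here is a brief illustrative sketch (not the authors' code): the 5-qubit GHZ state, which requires 2^5 amplitudes in the dense representation, is stored exactly with bond dimension 2, and any amplitude is recovered by contracting the chain along a bitstring. The [left, physical, right] index convention is an assumption for illustration.

```python
import numpy as np

def ghz_mps(n):
    """Bond-dimension-2 MPS tensors for the n-qubit GHZ state
    (|0...0> + |1...1>)/sqrt(2). Tensors are indexed [left, physical, right]."""
    # Bulk tensor: the bond index "remembers" whether we are on the
    # all-zeros branch or the all-ones branch.
    A = np.zeros((2, 2, 2))
    A[0, 0, 0] = 1.0
    A[1, 1, 1] = 1.0
    left = np.zeros((1, 2, 2))
    left[0, 0, 0] = left[0, 1, 1] = 1 / np.sqrt(2)
    right = np.zeros((2, 2, 1))
    right[0, 0, 0] = right[1, 1, 0] = 1.0
    return [left] + [A] * (n - 2) + [right]

def amplitude(mps, bits):
    """Contract the MPS along a bitstring to obtain its amplitude."""
    vec = np.ones((1,))
    for tensor, b in zip(mps, bits):
        vec = vec @ tensor[:, b, :]
    return float(vec[0])

mps = ghz_mps(5)
print(amplitude(mps, [0, 0, 0, 0, 0]))  # 1/sqrt(2)
print(amplitude(mps, [0, 1, 0, 1, 0]))  # 0.0
```

States with volume-law entanglement force the bond dimension to grow exponentially, which is the regime where the quantum-assisted MPS simulations discussed in the talk become relevant.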


On the classical hardness of spoofing linear cross-entropy benchmarking

Presenting Author: Sam Gunn, University of Texas, Austin
Contributing Author(s): Scott Aaronson

Recently, Google announced the first demonstration of quantum computational supremacy with a programmable superconducting processor. Their demonstration is based on collecting samples from the output distribution of a noisy random quantum circuit, then applying a statistical test to those samples called Linear Cross-Entropy Benchmarking (Linear XEB). This raises a theoretical question: how hard is it for a classical computer to spoof the results of the Linear XEB test? In this short note, we adapt an analysis of Aaronson and Chen [2017] to prove a conditional hardness result for Linear XEB spoofing. Specifically, we show that the problem is classically hard, assuming that there is no efficient classical algorithm that, given a random n-qubit quantum circuit C, estimates the probability of C outputting a specific output string, say 0^n, with variance even slightly better than that of the trivial estimator that always estimates 1/2^n. Our result automatically encompasses the case of noisy circuits.
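The Linear XEB statistic discussed above has a simple form: F = 2^n * mean(p(x_i)) - 1, where p is the circuit's ideal output distribution and x_i are the observed samples. The following sketch (an illustrative assumption, not the authors' code) uses a toy exponential-weight distribution as a stand-in for the Porter-Thomas statistics of a random circuit, and contrasts an honest-but-uninformative uniform sampler with a spoofer that has full knowledge of p.

```python
import numpy as np

def linear_xeb(ideal_probs, samples):
    """Linear cross-entropy benchmarking score: F = 2^n * mean_i p(x_i) - 1.

    Uniform sampling scores 0 in expectation; preferentially sampling
    high-probability strings drives the score up.
    """
    d = len(ideal_probs)  # d = 2^n
    return d * np.mean([ideal_probs[x] for x in samples]) - 1.0

# Toy "ideal distribution": exponential (Porter-Thomas-like) weights.
rng = np.random.default_rng(0)
n = 10
p = rng.exponential(size=2**n)
p /= p.sum()

uniform_samples = rng.integers(0, 2**n, size=2000)  # ignorant sampler
greedy_samples = np.argsort(p)[-100:]               # spoofer that knows p

print(linear_xeb(p, uniform_samples))  # near 0
print(linear_xeb(p, greedy_samples))   # well above 0
```

The hardness result above says that, absent the assumed estimation algorithm, no efficient classical sampler can reach a nontrivial score, precisely because it cannot gain the knowledge of p the greedy spoofer exploits here.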

Read this article online: https://arxiv.org/pdf/1910.12085.pdf


Logical cooling for robust analogue quantum simulation

Presenting Author: Craig Hogle, Sandia National Laboratories
Contributing Author(s): Peter Maunz, Jaimie S. Stephens, Kevin Young, Robin Blume-Kohout, Daniel Stick, Susan M. Clark

Analogue quantum simulation is arguably the most promising near-term application of quantum computing. However, it is unknown how noise may limit analogue simulators’ computational power. Using a technique to remove errors in the computational basis of the system, without resorting to a full error-correcting scheme, we aim to both measure and increase an analogue quantum simulator’s robustness to noise, using a chain of trapped ions in a state-of-the-art microfabricated surface electrode trap. SNL is managed and operated by NTESS, LLC, a subsidiary of Honeywell International, Inc., for the US DOE NNSA under contract DE-NA0003525. The views expressed here do not necessarily represent the views of the DOE or the U.S. Government. SAND2019-13560 A


Encoded logical qubits and reservoir engineering in a trapped-ion mechanical oscillator

Presenting Author: Jonathan Home, ETH, Zurich

I will describe experiments in which measurements of modular functions of position and momentum can be used to encode and manipulate information stored in an oscillator. For trapped ions, such measurements can be implemented using a single bichromatic coupling between the oscillator and an ancilla spin. This has allowed us to encode and manipulate a logical qubit encoded in grid states of the motional oscillator of a single trapped ion, which were first proposed by Gottesman, Kitaev and Preskill. More recently, we have developed a scheme for performing autonomous stabilization of the qubit subspace, replacing the spin measurement with a coherent operation followed by spin relaxation using optical pumping. This scheme represents a form of reservoir engineering, and also results in an efficient form of laser cooling. In addition to work on single oscillators, I will briefly review recent experimental results which target scaling trapped-ion quantum computers into multiple dimensions using arrays of micro-fabricated Penning traps.


Variational preparation of quantum Hall states on a lattice

Presenting Author: Eric Jones, Colorado School of Mines
Contributing Author(s): Eliot Kapit

Simulation of many-body quantum systems is one of the most promising applications of near-term quantum computers. The fractional quantum Hall states display fascinating many-body physics such as topological order and strong correlations, and so are interesting candidates for quantum simulation experiments. We classically compute the low-energy spectrum of the Kapit-Mueller Hamiltonian for hardcore bosons on a lattice by exact diagonalization. The Laughlin state is an exact ground state of this long-range Hamiltonian for appropriate magnetic flux densities. In addition, we study the low-lying spectrum of a shorter-range proxy Hamiltonian and tune its hopping and interaction parameters in order to optimize the associated topological degeneracy and many-body gap. We then demonstrate a scheme for variational preparation of the Laughlin state on the lattice through a Trotterization of adiabatic state preparation with defect-pinned particles as the reference state. Such calculations suggest a way forward in the simulation of fractional quantum Hall states on quantum hardware.
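The Trotterization step mentioned above can be sketched on toy matrices (an illustrative assumption, not the authors' Hamiltonian): first-order Trotterization approximates exp(-i(A+B)t) by r alternating steps of exp(-iA t/r) and exp(-iB t/r), with error that falls off roughly as 1/r for non-commuting A and B.

```python
import numpy as np

def evolve(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(1)

def rand_herm(d):
    """Random Hermitian matrix, standing in for a Hamiltonian term."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

# Toy non-commuting terms, e.g. a hopping part and an interaction part.
A, B, t = rand_herm(4), rand_herm(4), 1.0
exact = evolve(A + B, t)

def trotter(r):
    """First-order Trotter circuit: [exp(-iA t/r) exp(-iB t/r)]^r."""
    step = evolve(A, t / r) @ evolve(B, t / r)
    return np.linalg.matrix_power(step, r)

for r in (1, 10, 100):
    print(r, np.linalg.norm(trotter(r) - exact))  # error shrinks with r
```

In the variational scheme above, each Trotter step becomes a layer of gates on hardware, so the number of steps r trades circuit depth against adiabatic-approximation error.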


Trapping and manipulating 2D Coulomb crystals for quantum information processing

Presenting Author: Alex Kato, University of Washington
Contributing Author(s): Megan Ivory, Jennifer Lilieholm, Liudmila Zhukas, Xiayu Linpeng, Vasilis Niaouris, Maria Viitaniemi, Kai-Mei Fu, Boris Blinov

Quantum computation in trapped ions is most commonly performed in linear Paul traps, where excess micromotion can be minimized. 2D Coulomb crystals offer several advantages that may enhance the scalability of trapped ion systems and enable more fault-tolerant computation schemes. Yet RF ion trap geometries that are capable of trapping 2D crystals inevitably lead to significant micromotion in ions away from the trap center, causing infidelities in qubit operations. Our trap is designed to strongly confine ions to the crystal plane, where transverse micromotion can be minimized and addressing laser beams experience no significant Doppler shift. While excess planar micromotion will be present, we seek to demonstrate recently proposed methods that only use transverse modes to apply a spin-dependent force to neighbouring ions, while accounting for in-plane micromotion through a series of segmented pulses to achieve high-fidelity two-qubit operations. We discuss our current progress in trapping and manipulating 2D ion crystals. In addition, we discuss beginning efforts to entangle a single trapped ion to a solid-state zinc oxide defect via direct photonic link.


Multipartite entanglement and secret key distribution in quantum networks

Presenting Author: Eneet Kaur, Louisiana State University
Contributing Author(s): Masahiro Takeoka, Mark M. Wilde, Wojciech Roga

Distribution and distillation of entanglement over a quantum network is an important task in quantum information theory. A fundamental question is to determine the ultimate performance of entanglement distribution over a given network. Although this question has been extensively explored for bipartite entanglement scenarios, less is known about multipartite entanglement distribution. Here we establish the fundamental limit on distributing multipartite entanglement, in the form of GHZ states, over a quantum network. In particular, we determine the multipartite entanglement/secret key distribution capacity of a quantum network in which the nodes are connected by lossy bosonic quantum channels, which corresponds to a practical quantum network consisting of optical links. Our result is also applicable to the distribution of multipartite secret keys, known as a common key in the quantum network scenario. These results set benchmarks for designing a network topology, as well as for network quantum repeaters for efficient GHZ state/common key distribution. Our result follows from an upper bound on distillable GHZ entanglement introduced here, called the "recursive-cut-and-merge" bound and which constitutes major progress on a longstanding fundamental problem in multipartite entanglement theory. This bound allows us to determine the exact distillable GHZ entanglement for a class of states consisting of products of bipartite pure states.


Scaling up quantum chemistry simulations using density matrix embedding theory

Presenting Author: Yukio Kawashima, 1QB Information Technologies

The simulation of large molecules using quantum computing is promising: the required computational resources scale only polynomially with molecular size, whereas they scale exponentially on classical computers. Quantum computers, however, remain limited in computational resources, so computational costs must be reduced. Problem decomposition (PD) techniques are powerful tools for reducing computational costs while maintaining accuracy in quantum chemistry simulations. The application of PD techniques shows promise in helping to scale up the ability to simulate larger molecules. We have developed QEMIST (Quantum-Enabled Molecular ab Initio Simulation Toolkit), a platform for both the classical and quantum simulation of large molecules, employing PD techniques. One PD technique implemented in QEMIST is density matrix embedding theory (DMET). Its use involves decomposing a molecule into fragments, treating each fragment as an open quantum system entangled with the remaining fragments, which together constitute that fragment's surrounding environment. We created an interface between DMET and quantum algorithms to perform quantum chemistry simulations. A DMET-based simulation of a ring of 10 hydrogen atoms reduced the required number of qubits from 20 to 4; the error of the calculated molecular energy was within 1.0 kcal/mol of the exact value. Employing the DMET method improved our ability to simulate larger molecules compared to conventional simulation techniques that do not make use of PD.


Practical figures of merit and thresholds for entanglement distribution in quantum networks

Presenting Author: Sumeet Khatri, Louisiana State University
Contributing Author(s): Corey T. Matyas, Aliza U. Siddiqui, Jonathan P. Dowling

Before global-scale quantum networks become operational, it is important to consider how to evaluate their performance so that they can be built to achieve the desired performance. We propose two practical figures of merit for the performance of a quantum network: the average connection time and the average largest entanglement cluster size. These quantities are based on the generation of elementary links in a quantum network, which is a crucial initial requirement that must be met before any long-range entanglement distribution can be achieved and which is inherently probabilistic with current implementations. We obtain bounds on these figures of merit for a particular class of quantum repeater protocols consisting of repeat-until-success elementary link generation followed by joining measurements at intermediate nodes that extend the entanglement range. Our results lead to requirements on quantum memory coherence times, requirements on repeater chain lengths in order to surpass the repeaterless rate limit, and requirements on other aspects of quantum network implementations. These requirements are based solely on the inherently probabilistic nature of elementary link generation in quantum networks, and they apply to networks with arbitrary topology.
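The probabilistic elementary-link generation described above can be illustrated with a small calculation (an illustrative sketch under simplifying assumptions, not the authors' model): if each of k elementary links succeeds independently with probability p per attempt round, the expected number of rounds until every link has succeeded at least once is the mean of the maximum of k geometric random variables, computable by summing tail probabilities.

```python
def mean_rounds_until_all_links(p, k, tol=1e-12):
    """Expected number of attempt rounds until all k elementary links of a
    repeat-until-success chain have each succeeded at least once, assuming
    i.i.d. success probability p per link per round.

    Uses E[T] = sum_{t>=0} P(T > t), with P(T > t) = 1 - (1 - (1-p)^t)^k.
    """
    total, t, q = 0.0, 0, 1.0 - p
    while True:
        tail = 1.0 - (1.0 - q**t) ** k
        if tail < tol:
            return total
        total += tail
        t += 1

print(mean_rounds_until_all_links(0.1, 1))  # single link: ~ 1/p = 10
print(mean_rounds_until_all_links(0.1, 5))  # longer chains wait longer
```

Waiting for the slowest link is why memory coherence times enter the requirements above: links that succeeded early must store their entanglement until the last link comes up.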

Read this article online: https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.1.023032, https://arxiv.org/abs/1905.06881


Engineering a practical quantum computer

Presenting Author: Jungsang Kim, Duke University

Trapped ions are one of the leading candidate platforms for realizing practically useful quantum computers. The introduction of advanced integration technologies to this traditional atomic physics research has provided an opportunity to convert a complex atomic physics experiment into a stand-alone programmable quantum computer. In this presentation, I will discuss the new enabling technologies that change the perception of a trapped-ion system as a scalable quantum computer, and the concrete progress made to date in this endeavor.


Lower bounds on the non-Clifford resources for quantum computations

Presenting Author: Vadym Kliuchnikov, Microsoft Research
Contributing Author(s): Michael Beverland, Earl Campbell, Mark Howard

We establish lower bounds on the number of resource states, also known as magic states, needed to perform various quantum computing tasks, treating stabilizer operations as free. Our bounds apply to adaptive computations using measurements and an arbitrary number of stabilizer ancillas. We consider (1) resource state conversion, (2) single-qubit unitary synthesis, and (3) computational tasks. To prove our resource conversion bounds we introduce two new monotones, the stabilizer nullity and the dyadic monotone, and make use of the already-known stabilizer extent. We consider conversions that borrow resource states, known as catalyst states, and return them at the end of the algorithm. We show that catalysis is necessary for many conversions and introduce new catalytic conversions, some of which are close to optimal. By finding a canonical form for post-selected stabilizer computations, we show that approximating a single-qubit unitary to within diamond-norm precision ε requires at least 1/7⋅log₂(1/ε)−4/3 T-states on average. This is the first lower bound that applies to synthesis protocols using fall-back, mixing techniques, and where the number of ancillas used can depend on ε. Up to multiplicative factors, we optimally lower bound the number of T or CCZ states needed to implement the ubiquitous modular adder and multiply-controlled-Z operations. When the probability of Pauli measurement outcomes is 1/2, some of our bounds become tight to within a small additive constant.
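To give a feel for the numbers, the quoted single-qubit synthesis bound 1/7⋅log₂(1/ε)−4/3 can be evaluated directly (a trivial illustrative calculation, not from the paper):

```python
import math

def t_count_lower_bound(epsilon):
    """Average T-state lower bound for synthesizing a single-qubit unitary
    to diamond-norm precision epsilon, as quoted in the abstract:
    (1/7) * log2(1/epsilon) - 4/3."""
    return math.log2(1.0 / epsilon) / 7.0 - 4.0 / 3.0

for eps in (1e-3, 1e-6, 1e-10):
    print(eps, t_count_lower_bound(eps))
```

Because the bound grows only logarithmically in 1/ε, even very demanding precisions require only a handful of T-states per unitary on average, which makes the constant factors in the bound significant.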

Read this article online: https://arxiv.org/pdf/1904.01124.pdf


Band engineering for quantum simulation in circuit QED

Presenting Author: Alicia Kollar, University of Maryland, College Park

The field of circuit QED has emerged as a rich platform for both quantum computation and quantum simulation. Lattices of coplanar waveguide (CPW) resonators realize artificial photonic materials in the tight-binding limit. Combined with strong qubit-photon interactions, these systems can be used to study dynamical phase transitions, many-body phenomena, and spin models in driven-dissipative systems. I will show that these waveguide cavities are uniquely deformable and can produce lattices and networks which cannot readily be obtained in other systems, including periodic lattices in a hyperbolic space of constant negative curvature. Furthermore, I will show that the one-dimensional nature of CPW resonators leads to degenerate flat bands and that criteria for when they are gapped can be derived from graph-theoretic techniques. The resulting gapped flat-band lattices are difficult to realize in standard atomic crystallography, but readily realizable in superconducting circuits.


Native and Trotter errors during intermediate-depth quantum simulations on a small, highly accurate quantum processor

Presenting Author: Kevin Kuper, University of Arizona
Contributing Author(s): Nathan Lysne, Pablo Poggi, Ivan Deutsch, Poul Jessen

Noisy, intermediate-scale quantum (NISQ) devices are improving rapidly but remain far short of the requirements for fault-tolerant computation. In the meantime, much of the effort in the field is focused on the development of analog quantum simulators that operate without error correction. We are currently exploring the capabilities and limitations of such devices, using as our test bed a small, highly accurate quantum (SHAQ) processor based on the combined electron-nuclear spin of a single Cs-133 atom in the electronic ground state. The Cs-atom based SHAQ processor is controlled with rf and µw magnetic fields, is fully programmable in its accessible 16-dimensional Hilbert space, and provides for direct measurement of the fidelity of the evolving quantum state, as well as more conventional quantum simulation of the time evolution of observables such as magnetization. We have used this SHAQ processor to study the impact of both native and Trotter errors on such simulations, finding that “macroscopic” properties such as magnetization are quantitatively less sensitive to errors than “microscopic” properties such as quantum state fidelities and survival properties. Lastly, we find that the balance between native and Trotter errors leads to an optimal point of operation where their joint effects are minimized.


Two-photon Fourier transform spectroscopy

Presenting Author: Tiemo Landes, University of Oregon
Contributing Author(s): Amr Tamimi, Jonathan Lavoie, Michael Raymer, Brian Smith, Andrew Marcus

We detail the various quantum pathways and interferences in a Mach-Zehnder interferometer resulting from insertion of time-frequency entangled photon pairs (EPP) into a single port of the interferometer. We then experimentally demonstrate two-photon coincidence Fourier transform spectroscopy and refractometry using frequency-degenerate EPP centered around 532 nm, generated via Type-I collinear spontaneous parametric down-conversion. The measurement is further improved by introducing phase modulation in the interferometer via acousto-optic modulators, enabling phase-sensitive measurement referenced to a helium-neon laser counter-propagating through the interferometer. The phase-sensitive measurement reduces the sampling requirement in path-delay space needed to fully reproduce the interference fringes, and it minimizes environmental noise and the effect of interferometer drift. We demonstrate the technique using a lock-in amplifier and a discrete time- and phase-tagging technique developed for low-flux measurements.

Read this article online: https://arxiv.org/abs/1910.04202


Entanglement-enhanced interferometry with neutral atoms

Presenting Author: Bethany Little, Sandia National Laboratories
Contributing Author(s): Matthew Chow, Lambert Parazzoli, Jonathan Bainbridge, Grant Biedermann, Jongmin Lee, Brandon Ruzic, Constantin Brif, and Peter Schwindt

Precision measurement and inertial sensing applications have relied on neutral atoms for many years. Recent advancements in utilizing Rydberg dressing to mediate strong, tunable interactions between neutral alkali atoms suggest that neutral atoms now also provide a promising platform for quantum sensing. Members of our team have shown that entanglement can enhance the performance of an atom interferometer, since the measurement uncertainty will follow Heisenberg scaling. Following a recent demonstration of entanglement of two Cs atoms in optical dipole traps and building on expertise in atom interferometry, we report on progress toward implementation of an entanglement-enhanced atom interferometer, which has the capability to scale to many atoms in the current apparatus. We discuss the effects of various error sources on the fidelity and progress on overcoming critical experimental challenges, as we work towards making advanced quantum sensing with neutral atoms a reality. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.


Simulating quantum field theory in the light-front formulation

Presenting Author: Peter J. Love, Tufts University
Contributing Author(s): Michael Kreshchuk, William Kirby, Gary Goldstein, Hugo Beauchemin

We explore the possibility of simulating relativistic field theories in the light-front (LF) formulation and argue that such a framework has numerous advantages compared to both lattice and second-quantized equal-time approaches. These include a small number of physical degrees of freedom leading to reduced resource requirements, efficient encoding with model-independent asymptotics, and sparse Hamiltonians. Many quantities of physical interest are naturally defined in the LF, resulting in simple measurements. The LF formulation allows one to trace the connection between relativistic field theories and quantum chemistry, thus permitting the use of numerous techniques developed in the last decade. It also provides a promising application for NISQ devices, since certain calculations may require only on the order of a hundred qubits. As an example, we provide a detailed algorithm for calculating analogues of QCD parton distribution functions in a simple 1+1-dimensional model. We also discuss the generalization to QCD and provide estimates.


Quantum supremacy using a programmable superconducting processor

Presenting Author: John Martinis, Google and University of California, Santa Barbara

The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.


Benchmarking near-term quantum computers

Presenting Author: Seth Merkel, IBM

As the field marches towards quantum advantage with near-term quantum processors, it becomes imperative to characterize, verify, and validate their performance. An outstanding scientific challenge for the community is to develop a scalable set of metrics or experiments that can shed light on the usability of a device for near-term algorithms. We propose a device-independent metric called the quantum volume and use it to characterize recent systems built at IBM. Moreover, it is critical to explore techniques that extend the computational reach of noisy systems, be it through understanding the underlying physics or through more efficient circuit compilation.


Generation of high-fidelity Mølmer-Sørensen interactions between neutral atoms using adiabatic Rydberg dressing

Presenting Author: Anupam Mitra, University of New Mexico CQuIC
Contributing Author(s): Michael J. Martin, Grant W. Biedermann, Alberto M. Marino, Pablo M. Poggi, Ivan H. Deutsch

Arrays of optically trapped neutral atoms interacting through the Rydberg dipole-blockade mechanism are a promising platform for scalable quantum information processors, including universal quantum computers, analog quantum simulators, and quantum sensors. Critical to the performance of these devices are high-fidelity two-qubit interactions. We show that by strongly dressing the ground states with Rydberg states in the presence of the blockade, we can implement high-fidelity entangling interactions on clock-state qubits. This gate can be made robust to imperfections such as atomic thermal motion, laser inhomogeneities, and an imperfect Rydberg blockade. In particular, we show that the error in implementing a two-qubit entangling gate is dominated by errors in the single-atom light shift, which can be easily mitigated using adiabatic dressing interleaved with a spin echo. This implements a two-qubit Mølmer-Sørensen gate. Modest current experimental parameters allow a gate fidelity of >99.5%, and higher gate fidelities are achievable with a stronger Rydberg blockade, longer Rydberg-state lifetimes, and larger Rabi frequencies. We also study the application of the Mølmer-Sørensen interaction to many-body systems with multiple qubits.

Read this article online: https://arxiv.org/abs/1911.04045
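As a numerical illustration of the target operation, the ideal Mølmer-Sørensen interaction exp(-iθ X⊗X) at θ = π/4 maps |00⟩ to a maximally entangled Bell state. The NumPy check below is a toy two-qubit sketch of that ideal gate, not the adiabatic-dressing implementation described in the abstract:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
XX = np.kron(X, X)

theta = np.pi / 4
# exp(-i*theta*XX) = cos(theta)*I - i*sin(theta)*XX, since XX @ XX = identity
U_ms = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * XX

psi = U_ms @ np.array([1, 0, 0, 0], dtype=complex)  # act on |00>
# The output is the Bell state (|00> - i|11>)/sqrt(2)
bell = np.array([1, 0, 0, -1j]) / np.sqrt(2)
assert np.allclose(psi, bell)
```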


Quantum gravity in the lab: teleportation by size and traversable wormholes

Presenting Author: Sepehr Nezami, California Institute of Technology
Contributing Author(s): Adam Brown, Hrant Gharibyan, Stefan Leichenauer, Henry Lin, Grant Salton, Leonard Susskind, Brian Swingle, Michael Walter

Traversable wormholes in holography exhibit a strange phenomenon: with the aid of a simple and weak coupling, any local signal inserted at time −t in one boundary system, followed by the dissipation caused by chaotic dynamics, reappears at time +t in the other boundary system. Inspired by traversable wormholes, we propose teleportation experiments that can readily be performed in an atomic physics lab and that exhibit similar behavior. We study this phenomenon in various systems when the entanglement between the two systems is maximal (i.e., the infinite-temperature thermofield double (TFD) state). We introduce the core information-theoretic paradigm behind this phenomenon, which we call Teleportation by Size, to encapsulate how the physics of operator-size growth naturally leads to transmission of a signal in many different scenarios. We argue that the infinite-temperature phenomenon, although it shares the surprising properties, does not immediately correspond to a signal going through a wormhole; in fact, in systems with a gravitational dual, it corresponds to transmission of the signal with the aid of vastly different geometries. Instead, we introduce a property of the growth distribution of operators, called size winding, which exists only at low temperature, and show that it explains the boundary physics of signals traversing geometrical wormholes. We argue that an imperfect form of size winding, common in quantum systems, leaves an imprint on the fidelity of teleportation.


A universal quantum computer based on long chains of ions

Presenting Author: Crystal Noel, University of Maryland Joint Quantum Institute

We present the system design and architecture of a trapped ion universal quantum processor with high-fidelity quantum gates and addressing of up to 32 qubits. Our approach takes advantage of individual optical addressing to achieve simultaneous high-fidelity operations on a long chain of 171Yb+ ions, resulting in one of the largest academic general-purpose quantum computers. Under the IARPA Logical Qubit (LogiQ) program, we aim to demonstrate a logical qubit using the Bacon-Shor [[9,1,3]] subsystem code. The Bacon-Shor code consists of 9 data qubits, encoding 1 logical qubit, with stabilizer circuits mapped to 4 ancilla qubits capable of correcting any single qubit error. In this talk, we report on the experimental progress made towards implementation of quantum error correction, including the encoding of the logical qubit and stabilizer readout. Additionally, we report progress towards achieving multiple rounds of error correction using added capabilities of sympathetic cooling on long chains and individual ancilla readout.


Theory of robust multi-qubit non-adiabatic gates for trapped-ions

Presenting Author: Roee Ozeri, Weizmann Institute of Science
Contributing Author(s): Ravid Shaniv, Tom Manovitz, Nitzan Akerman, Lee Peleg, Lior Gazit, Yotam Shapira and Ady Stern

Entanglement gates are essential building blocks of quantum computers. Such two-qubit gates have been demonstrated in trapped-ion systems with outstanding fidelities. However, retaining the gate’s performance in a large qubit register remains a major challenge in the realization of a quantum computer. Here we propose and investigate multi-qubit entanglement gates for trapped ions in ion-chain configurations. Our gates purposefully utilize all the normal modes of motion of the ion chain, allowing them to operate outside the adiabatic regime. The coupling to the different normal modes of motion is used to form all-to-all entangling gates, e.g., gates that rotate the ground state to a GHZ state, and to generate spin-Hamiltonian interactions such as the nearest-neighbor Ising model or the Su-Schrieffer-Heeger topological Hamiltonian. Our gates use a multi-tone laser field that couples uniformly to all ions, i.e., there is no need to address the ions individually. Thus, our method is simple to implement and suits most trapped-ion architectures. Furthermore, we endow our gates with robustness properties that make them resilient to various sources of system noise and imperfections.

Read this article online: http://arxiv.org/abs/1911.03073


Distance-independent rate for entanglement generation in a quantum network

Presenting Author: Ashlesha Patil, University of Arizona
Contributing Author(s): Mihir Pant, Don Towsley and Saikat Guha

Quantum repeaters, built with entangled-photon sources and heralded quantum memories, are connected via lossy links in a network topology. In every time slot, a Bell state is created across each link, between two qubits held in memories at either end, with probability p. A node can attempt an n-qubit measurement in a maximally entangled (e.g., GHZ-state) basis, which succeeds with probability q. When n = 2, i.e., when only Bell-state measurements are used, it was previously shown that: (1) even with only local link-state information (a node knows whether its neighbouring links successfully created entanglement in a time slot), the end-to-end entanglement rate exceeds that of routing along the shortest path; (2) yet even with global link-state information (success-failure outcomes for all links), the rate falls off exponentially with distance. For n >= 3, we present a protocol for entanglement distribution, and a slightly simpler one for quantum key distribution, that afford a distance-independent rate, using only local link state, in a non-trivial region of (p, q) (i.e., where both links and measurements can fail). For the entanglement-distribution protocol, the end result is an n-qubit GHZ state shared by a set of users. When the network topology is a square grid, for q = 1, the threshold value of p for which the above holds is 0.62. This translates to about 10 km of single-mode fiber assuming no other losses. The (p, q) thresholds vary with the topology G and decrease as n increases. Extensions to serving multiple user groups remain ongoing work.
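As a back-of-envelope illustration (not the protocol of the abstract), one can ask how often a degree-4 node in the square grid sees at least k of its incident links herald a Bell pair in a time slot and also succeeds at its GHZ-basis measurement. The helper below and its default parameters are hypothetical:

```python
from math import comb

def node_fusion_prob(p, q, deg=4, k=3):
    """Probability that at least k of a node's deg incident links herald a
    Bell pair (each succeeding independently with probability p) AND the
    node's GHZ-basis fusion measurement succeeds (probability q)."""
    at_least_k = sum(comb(deg, m) * p**m * (1 - p)**(deg - m)
                     for m in range(k, deg + 1))
    return q * at_least_k

# e.g. p = 0.9, q = 1: 4*(0.9**3)*0.1 + 0.9**4 = 0.9477
assert abs(node_fusion_prob(0.9, 1.0) - 0.9477) < 1e-12
```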


Limits to single photon detection: Amplification

Presenting Author: Tzula Propp, University of Oregon
Contributing Author(s): Steven J van Enk

We have constructed a model of photo detection that is both idealized and realistic enough to calculate the limits and tradeoffs inherent to single photon detector (SPD) figures of merit. This model consists of three parts: transmission, amplification, and measurement. In this talk, we discuss the effects of signal amplification post-filtering; by first writing correct commutator-preserving transformations for non-linear photon-number amplification (e.g. avalanche photodiode, electron-hole pair creation, electron shelving), we derive alternative noise limits that outperform the well-known Caves limits for linear amplification of bosonic mode amplitudes and possess no zero-temperature noise contribution to boson number SNR. We then discuss the optimistic implications for single photon detection. Lastly, we briefly discuss the pre-amplification filtering process (transmission) along with the construction of POVMs completely describing photo detectors (measurement), from which one can calculate all standard SPD figures of merit.

Read this article online: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-27-16-23454


Micron-scale superconducting wires for polarization insensitive, near-unity efficiency single-photon detection

Presenting Author: Dileep Reddy, National Institute of Standards and Technology, Boulder; University of Colorado, Boulder
Contributing Author(s): Jeff Chiles, Sonia M. Buckley, Adriana E. Lita, Varun B. Verma, Sae Woo Nam, Richard P. Mirin

Current-biased, superconducting-nanowire single-photon detectors are typically fabricated with wire-widths in the 100-200 nm range in order to ensure sensitivity to single-photon absorption at near-infrared energies. This has constrained the fill factors of nanowire meanders to conform to low values that limit current-crowding effects. It has also penalized large-area devices with large kinetic inductances and polarization-dependent efficiencies. Recent advances in silicon-rich WSi amorphous-superconducting film compositions have extended device sensitivities to mid-infrared photons. They have also enabled near-infrared sensitive devices with wire-widths in the 1-3 micron range, thus allowing for larger active areas with lower inductances, faster pulse-recovery times, and near-complete polarization independence. I will be presenting the design and measurement results for fiber-coupled superconducting microwire near-IR single-photon detectors that benefit from the aforementioned qualities, and boast system efficiencies exceeding 98%. I will also be presenting application extensions into imaging, and low-photon-number resolved detection.


Utilizing NISQ devices for evaluating quantum algorithms

Presenting Author: Eleanor Rieffel, NASA - Ames Research Center
Contributing Author(s): NASA QuAIL team

With the advent of quantum supremacy, we have an unprecedented opportunity to explore quantum algorithms in new ways. The emergence of general-purpose quantum processors opens up empirical exploration of quantum algorithms far beyond what has been possible to date. Challenging computational problems arising in the practical world are often tackled by heuristic algorithms. While heuristic algorithms work well in practice, by definition they have not been analytically proven to be the best approach or to outperform the best previous approach. Instead, heuristic algorithms are empirically tested on benchmark and real-world problems. With the empirical evaluation NISQ hardware enables, we expect a broadening of established applications of quantum computing. What to run and how best to utilize these still limited quantum devices to gain insight into quantum algorithms remain open research questions. We discuss opportunities and challenges for using NISQ devices to evaluate quantum algorithms, including in elucidating quantum mechanisms and their uses for quantum computational purposes, in the design of novel or refined quantum algorithms, in compilation, error-mitigation, and robust algorithms design, and in techniques for evaluating quantum algorithms empirically.


Demonstration of large-scale quantum chemistry calculations using the Sycamore quantum processor

Presenting Author: Nicholas Rubin, Google

Variational simulation of quantum chemistry is a likely first application for noisy, intermediate-scale quantum (NISQ) computers in the post-supremacy age. We simulate a chemistry model significantly larger than previous implementations on any quantum computing platform. The model is optimized through a variational outer loop and a new iterative method, based on the generalized Brillouin stopping condition, that allows the use of an approximate Hessian. Upon application of an error-mitigation scheme based on pure-state n-representability conditions, our experiments run on Google’s Sycamore quantum processor achieve chemical accuracy. The model provides an efficiently verifiable circuit that has a large degree of entanglement and serves as a circuit primitive for fermionic simulation. More broadly, we demonstrate how this fermionic-simulation circuit primitive can be used to benchmark large-scale devices.


Scalable quantum computing with neutral atoms

Presenting Author: Mark Saffman, University of Wisconsin-Madison

One of the daunting challenges in developing a practical quantum computer is the need to scale to a very large number of qubits. Neutral atoms are one of the most promising approaches for meeting this challenge. I will describe our recent results implementing quantum gates, including new adiabatic pulse sequences, in a large 2D array of atomic qubits.


Quantum simulation and computation with programmable Rydberg atom arrays

Presenting Author: Giulia Semeghini, Harvard University

Rydberg atom arrays have emerged in the past few years as a promising resource for quantum simulation and quantum information processing. The ability to produce arbitrary spatial arrangements of neutral atoms is combined with the coherent control of their internal states, including coupling to Rydberg states to achieve strong interactions, to create an extremely versatile platform. Recent experiments on 1D arrays have highlighted the potential of this system for high-fidelity quantum information processing, demonstrating two different techniques for entanglement engineering. I will present these results and report on the recent upgrade of our platform to control hundreds of atoms in arbitrary 2D geometries.


Variational fast forwarding for quantum simulation beyond the coherence time

Presenting Author: Andrew Sornborger, Los Alamos National Laboratory
Contributing Author(s): Cristina Cirstoiu, Zoe Holmes, Joseph Iosue, Lukasz Cincio, Patrick Coles

Trotterization-based, iterative approaches to quantum simulation are restricted to simulation times less than the coherence time of the quantum computer, which limits their utility in the near term. Here, we present a hybrid quantum-classical algorithm, called Variational Fast Forwarding (VFF), for decreasing the quantum circuit depth of quantum simulations. VFF seeks an approximate diagonalization of a short-time simulation to enable longer-time simulations using a constant number of gates. Our error analysis provides two results: (1) the simulation error of VFF scales at worst linearly in the fast-forwarded simulation time, and (2) our cost function's operational meaning as an upper bound on average-case simulation error provides a natural termination condition for VFF. We implement VFF for the Hubbard, Ising, and Heisenberg models on a simulator. Finally, we implement VFF on Rigetti's quantum computer to show simulation beyond the coherence time.

Read this article online: https://arxiv.org/abs/1910.04292
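The core fast-forwarding step, diagonalizing a short-time evolution once and then reaching long times at constant depth by powering the eigenphases, can be sketched classically with NumPy. The block below is a toy dense-matrix stand-in for the variationally diagonalized circuit, not the VFF algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 4x4 Hermitian "Hamiltonian" (stand-in for the simulated model)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

dt = 0.1
E, V = np.linalg.eigh(H)
U_dt = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T  # one short-time step

# Diagonalize the short-time step ONCE ...
w, W = np.linalg.eig(U_dt)
# ... then fast-forward: n steps at the cost of a single diagonal power
n = 500
U_fast = W @ np.diag(w**n) @ np.linalg.inv(W)

U_exact = V @ np.diag(np.exp(-1j * E * n * dt)) @ V.conj().T
assert np.allclose(U_fast, U_exact)
```

The same idea underlies VFF's constant gate count: the diagonalizing circuit is fixed, and only the diagonal layer depends on the simulation time.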


Laser-free trapped-ion entangling gates

Presenting Author: Raghavendra Srinivas, National Institute of Standards and Technology, Boulder
Contributing Author(s): Shaun Burd, Robert Sutherland, Hannah Knaack, Dietrich Leibfried, David Wineland, Andrew Wilson, David Allcock, Daniel Slichter

Trapped-ion entangling gates are usually performed using laser-induced coupling of the ions’ internal spin states to their motion. Laser-free methods, which eliminate photon-scattering errors and offer benefits for scalability, have been proposed and demonstrated using static magnetic field gradients or magnetic field gradients oscillating at GHz frequencies [1-4]. We demonstrate a recently proposed method for trapped-ion entangling gates [5], implemented using an oscillating magnetic field gradient at radio frequency in addition to two microwave magnetic fields symmetrically detuned about the qubit frequency. This implementation offers important technical advantages over other laser-free techniques, while also enabling laser-free entangling gates with reduced sensitivity to qubit frequency errors. These experiments are performed in a surface-electrode trap that incorporates current-carrying electrodes to generate the microwave fields and the oscillating magnetic field gradient. Currently, we achieve a Bell-state fidelity of 0.996(2) with ground-state-cooled ions and 0.991(3) for ions cooled to the Doppler limit (nbar=2). [1] Mintert and Wunderlich PRL 87, 257904 (2001) [2] Weidt et al. PRL 117, 220501 (2016) [3] Ospelkaus et al. Nature 476, 181 (2011) [4] Harty et al. PRL 117, 140501 (2016) [5] Sutherland et al. NJP 21, 033033 (2019)


A hybrid quantum approximate optimization algorithm incorporating classical heuristics

Presenting Author: Jaimie S. Stephens, Sandia National Laboratories
Contributing Author(s): William Bolden; Ojas Parekh

The Quantum Approximate Optimization Algorithm (QAOA) (Farhi et al., 2014) can approximately solve NP-hard problems. However, the performance of QAOA is not well understood, especially relative to problem-specific classical heuristics. We propose boosting the performance of QAOA by leveraging classical heuristics as black-box oracles in a generic and automatic way. This allows QAOA to benefit from improved classical algorithms as they are discovered. We replace the QAOA cost Hamiltonian with an implicit cost operator derived from a classical heuristic of choice, allowing QAOA to optimize over the output of a classical heuristic. Our approach also eliminates the need for specially designed mixing Hamiltonians for constrained problems. We demonstrate our hybrid QAOA on several discrete optimization problems using high-quality classical heuristics, including local search. We observe that: (i) the performance of our hybrid QAOA improves as the computational cost of local search is increased, and (ii) our hybrid QAOA outperforms both QAOA and the selected classical heuristics on their own. Thus we offer a new means for QAOA to automatically benefit from classical advances. Sandia National Labs is managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a subsidiary of Honeywell International, Inc., for the U.S. DOE, National Nuclear Security Administration under contract DE-NA0003525.
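As a concrete example of the kind of classical heuristic that could serve as the black-box oracle, here is a simple single-bit-flip local search for Max-Cut. This is an illustrative sketch of one candidate heuristic, not the abstract's cost-operator construction, which wraps such a routine inside QAOA:

```python
def cut_value(edges, x):
    """Number of edges crossing the cut defined by bit assignment x."""
    return sum(1 for u, v in edges if x[u] != x[v])

def local_search(edges, x):
    """Greedily flip single bits while any flip improves the cut,
    returning a locally optimal assignment."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x.copy()
            y[i] ^= 1
            if cut_value(edges, y) > cut_value(edges, x):
                x, improved = y, True
    return x

# 4-cycle graph: the maximum cut is 4, found from the all-zeros string
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert cut_value(edges, local_search(edges, [0, 0, 0, 0])) == 4
```

In the hybrid scheme, QAOA would then optimize over the outputs of such a routine rather than over raw bitstrings.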


Non-Markovianity of the post-Markovian master equation

Presenting Author: Chris Sutherland, University of Southern California
Contributing Author(s): Daniel Lidar, Todd Brun

An easily solvable quantum master equation has long been sought that takes into account memory effects induced on the system by the bath, i.e., non-Markovian effects. We briefly review the post-Markovian master equation (PMME), which is relatively easy to solve, and analyze a simple example where solutions obtained exhibit non-Markovianity. We apply the distinguishability measure introduced by Breuer et al., and we also explicitly analyze the divisibility of the associated quantum dynamical maps. We give a mathematical condition on the memory kernel used in the PMME that guarantees non-CP-divisible dynamics.

Read this article online: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.042119


Phase-matched scattering from a reconfigurable array of trapped neutral atoms

Presenting Author: Hikaru Tamura, University of Michigan
Contributing Author(s): Huy Nguyen, Paul Berman, Alex Kuzmich

We investigate phase-matched scattering from arrays of cold atoms that are confined in optical tweezers in one- and two-dimensional geometries. For a linear chain, we observe phase-matched reflective scattering in a cone about the symmetry axis of the array that scales as the square of the number of atoms in the chain. For two linear chains of atoms, the phase-matched reflective scattering is enhanced or diminished as a result of Bragg scattering. Such scattering can be used for mapping collective states within an array of neutral atoms onto propagating light fields and for establishing quantum links between separated arrays.
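The N-squared scaling has a simple structure-factor origin: in the phase-matched direction, the fields scattered by all N atoms add in phase. A minimal numerical check for a toy 1D chain (spacing and mismatch values are hypothetical, not the experiment's parameters):

```python
import numpy as np

N = 8
d = 0.5                    # atom spacing in units of the wavelength (toy value)
x = d * np.arange(N)       # positions along the chain

def intensity(dk):
    """Scattered intensity |sum_j exp(i*dk*x_j)|^2 for wavevector mismatch dk."""
    return abs(np.exp(1j * dk * x).sum()) ** 2

# Phase matching (dk = 0): all N amplitudes add coherently -> N^2
assert abs(intensity(0.0) - N**2) < 1e-9
# Away from phase matching, the coherent sum is far weaker
assert intensity(2 * np.pi * 0.3 / d) < N**2
```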


Quantum control for quantum algorithm design

Presenting Author: Birgitta Whaley, University of California, Berkeley

Optimization is central to the explicit construction of many quantum algorithms. It plays a particularly important role in quantum algorithms characterized by heuristics, such as the Quantum Approximate Optimization Algorithm (QAOA) and hybrid quantum-classical algorithms, such as the variational quantum eigensolver approach to electronic structure calculations. It is also important in development of efficient quantum algorithms for quantum simulations. I shall describe the use of techniques from quantum control and optimization to enable co-design of quantum algorithms, presenting examples of robust design of QAOA protocols and explicit construction of quantum signal processing (QSP) protocols for Hamiltonian simulation and linear algebra.


Quantifying the incompatibility of quantum measurements relative to a basis

Presenting Author: Paolo Zanardi, University of Southern California
Contributing Author(s): Georgios Styliaris

Motivated by quantum resource theories, we introduce a notion of incompatibility for quantum measurements relative to a reference basis. The notion arises by considering states diagonal in that basis and investigating whether probability distributions associated with different quantum measurements can be converted into one another by probabilistic postprocessing. The induced preorder over quantum measurements is directly related to multivariate majorization and gives rise to families of monotones, i.e., scalar quantifiers that preserve the ordering. For the case of orthogonal measurements, we establish a quantitative connection between incompatibility, quantum coherence, and entropic uncertainty relations. We generalize the construction to include arbitrary positive-operator-valued measures and report complete families of monotones.

Read this article online: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.070401
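The majorization ordering behind such monotones can be illustrated directly: if distribution p majorizes q, then any Schur-concave quantifier, such as the Shannon entropy, is lower on p. A minimal sketch of the standard (univariate) majorization check, not the paper's multivariate construction:

```python
import math

def majorizes(p, q, tol=1e-12):
    """True if p majorizes q: the descending partial sums of p
    dominate those of q, with equal totals."""
    if abs(sum(p) - sum(q)) > tol:
        return False
    ps, qs = sorted(p, reverse=True), sorted(q, reverse=True)
    sp = sq = 0.0
    for a, b in zip(ps, qs):
        sp += a
        sq += b
        if sp < sq - tol:
            return False
    return True

def shannon_entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.7, 0.2, 0.1]
q = [0.4, 0.35, 0.25]   # obtainable from p by doubly stochastic mixing
assert majorizes(p, q) and not majorizes(q, p)
# Entropy is Schur-concave: the majorizing distribution has lower entropy
assert shannon_entropy(p) < shannon_entropy(q)
```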