Pre-optimization of quantum circuits, barren plateaus and classical simulability: tensor networks to unlock the variational quantum eigensolver
2602.04676 | Wed Feb 04 2026 | quant-ph | PDF
Variational quantum algorithms are practical approaches to prepare ground states, but their potential for quantum advantage remains unclear. Here, we use differentiable 2D tensor networks (TN) to optimize parameterized quantum circuits that prepare the ground state of the transverse field Ising model (TFIM). Our method enables the preparation of states with high energy accuracy, even for large systems beyond 1D. We show that TN pre-optimization can mitigate the barren plateau issue by giving access to enhanced gradient zones that do not shrink exponentially with system size. We evaluate the classical cost of simulating energies at these warm starts, and identify regimes where quantum hardware offers better scaling than TN simulations.
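As orientation for the target problem, here is a minimal exact-diagonalization sketch of the TFIM Hamiltonian in 1D. The paper works with 2D tensor networks on large systems; this small toy only fixes the Hamiltonian conventions, and the system size and field strength below are illustrative choices, not the paper's.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron_chain(ops):
    """Tensor product of a list of single-site operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def tfim_hamiltonian(n, g):
    """H = -sum_i Z_i Z_{i+1} - g sum_i X_i on an open chain of n spins."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        ops = [I2] * n
        ops[i] = ops[i + 1] = Z
        H -= kron_chain(ops)
    for i in range(n):
        ops = [I2] * n
        ops[i] = X
        H -= g * kron_chain(ops)
    return H

H = tfim_hamiltonian(6, 1.0)       # 6 spins at the critical field g = 1
e0 = np.linalg.eigvalsh(H)[0]      # exact ground-state energy as a reference
```

A warm-started circuit would be judged by how closely its energy approaches e0; for the 2D systems studied in the paper, exact diagonalization is unavailable and the TN itself provides the reference.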
Benchmarking Quantum and Classical Algorithms for the 1D Burgers Equation: QTN, HSE, and PINN
2602.03925 | Wed Feb 04 2026 | quant-ph | PDF
We present a comparative benchmark of Quantum Tensor Networks (QTN), the Hydrodynamic Schrödinger Equation (HSE), and Physics-Informed Neural Networks (PINN) for simulating the 1D Burgers' equation. Evaluating these emerging paradigms against classical GMRES and Spectral baselines, we analyse solution accuracy, runtime scaling, and resource overhead across a range of grid resolutions. Our results reveal a distinct performance hierarchy. The QTN solver achieves superior precision with remarkable near-constant runtime scaling, effectively leveraging entanglement compression to capture shock fronts. In contrast, while the Finite-Difference HSE implementation remains robust, the Spectral HSE method suffers catastrophic numerical instability and diverges significantly at high resolutions. PINNs demonstrate flexibility as mesh-free solvers but stall at lower accuracy tiers, limited by spectral bias compared to grid-based methods. Ultimately, while quantum methods offer novel representational advantages for low-resolution fluid dynamics, this study confirms they currently yield no computational advantage over classical solvers without fault tolerance or significant algorithmic breakthroughs in handling non-linear feedback.
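For orientation, the kind of classical grid-based baseline these methods are compared against can be sketched in a few lines of numpy. The scheme, grid size, viscosity, and time step below are illustrative choices, not the paper's benchmark settings.

```python
import numpy as np

def burgers_fd(nx=256, nt=2000, nu=0.01, T=0.5):
    """Explicit finite-difference solver for the viscous Burgers equation
    u_t + u u_x = nu u_xx on [0, 2pi) with periodic boundary conditions."""
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    dx = x[1] - x[0]
    dt = T / nt
    u = np.sin(x)   # smooth initial profile that steepens toward a shock
    for _ in range(nt):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)         # central advection
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2   # diffusion
        u = u + dt * (-u * ux + nu * uxx)
    return x, u

x, u = burgers_fd()
```

The explicit step is stable here only because dt is small relative to dx and dx**2/nu; spectral and GMRES-based baselines trade this restriction for different costs.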
Approximate simulation of complex quantum circuits using sparse tensors
2602.04239 | Tue Feb 03 2026 | quant-ph | PDF
The study of quantum circuit simulation using classical computers is a key research topic that helps define the boundary of verifiable quantum advantage, solve quantum many-body problems, and inform the development of quantum hardware and software. Tensor networks have become leading mathematical tools for these tasks. Here we introduce a method to approximately simulate quantum circuits using sparsely populated tensors. We describe a sparse tensor data structure that can represent quantum states with no underlying symmetry, and outline algorithms to efficiently contract and truncate these tensors. We show that the data structure and contraction algorithm are efficient, leading to expected runtime scalings versus qubit number and circuit depth. Our results motivate future research on the optimization of sparse tensor networks for quantum simulation.
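A toy version of the core idea, storing only a bounded set of basis-state amplitudes and truncating after each gate, might look like the following. The dictionary data structure and the largest-amplitude truncation rule here are illustrative stand-ins, not the paper's sparse tensor machinery.

```python
import numpy as np

class SparseState:
    """n-qubit state stored as {basis index -> amplitude}, keeping only
    the max_terms largest-magnitude amplitudes after every gate."""

    def __init__(self, n, max_terms=64):
        self.n, self.max_terms = n, max_terms
        self.amps = {0: 1.0 + 0j}   # start in |0...0>

    def apply_1q(self, gate, q):
        """Apply a 2x2 gate to qubit q, then truncate and renormalize."""
        new = {}
        for basis, amp in self.amps.items():
            bit = (basis >> q) & 1
            for out_bit in (0, 1):
                g = gate[out_bit, bit]
                if g == 0:
                    continue
                target = (basis & ~(1 << q)) | (out_bit << q)
                new[target] = new.get(target, 0) + g * amp
        top = sorted(new.items(), key=lambda kv: -abs(kv[1]))[: self.max_terms]
        norm = np.sqrt(sum(abs(a) ** 2 for _, a in top))
        self.amps = {b: a / norm for b, a in top}

# Hadamards on three qubits: the state becomes a uniform superposition
state = SparseState(3, max_terms=1024)
Hd = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
for q in range(3):
    state.apply_1q(Hd, q)
```

With a small max_terms the same loop becomes an approximate simulation: low-weight amplitudes are dropped and the state renormalized, which is where the method trades accuracy for memory.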
Primary charge-4e superconductivity from doping a featureless Mott insulator
2602.04011 | Tue Feb 03 2026 | cond-mat.str-el cond-mat.mes-hall cond-mat.supr-con | PDF
Superconductivity is usually understood as a phase in which charge-2e Cooper pairs are condensed. Charge-4e superconductivity has largely been discussed as a vestigial order at finite temperature emerging from charge-2e states. Primary charge-4e superconducting phases at zero temperature remain scarce in both experiments and microscopic models. Here we argue that a doped featureless Mott insulator with symmetry provides a natural platform for primary charge-4e superconductivity, based on perturbative renormalization group arguments and group-theoretic considerations. As a concrete realization, we construct a bilayer Hubbard model with tunable onsite and symmetries that exhibits a featureless Mott insulating phase at half filling. Its low-energy physics is captured by a generalized ESD model, featuring an effective Hamiltonian that is purely kinetic within the constrained Hilbert space. Using density matrix renormalization group (DMRG) simulations, we find a primary charge-4e superconducting phase in the ESD model and a conventional primary charge-2e phase in the case. We further characterize the corresponding normal states and discuss the resulting finite-temperature phase diagram.
Spin and Charge Conductivity in the Square Lattice Fermi-Hubbard Model
2602.03771 | Tue Feb 03 2026 | cond-mat.str-el cond-mat.quant-gas | PDF
Dynamical properties are notoriously difficult to compute in numerical treatments of the Fermi-Hubbard model, especially in two spatial dimensions. However, they are essential in providing insight into some of the most important and less well-understood phases of the model, such as the pseudogap and strange metal phases at relatively high temperatures, or unconventional superconductivity at lower temperatures, away from commensurate filling. Here, we use numerical linked-cluster expansions to compute spin and charge optical conductivities of the model at different temperatures and strong interaction strengths via the exact real-time correlation functions of the current operators. We mitigate systematic errors associated with limited access to the long-time behavior of the correlators by introducing fits and allowing for non-zero Drude weights when appropriate. We compare our results to available data from optical lattice experiments and find that the Drude contributions can account for the theory-experiment gap in the DC spin conductivity of the model at half filling in the strong-coupling region. Our method helps paint a more complete picture of the conductivity in the two-dimensional Hubbard model and opens the door to studying dynamical properties of quantum lattice models in the thermodynamic limit.
Calculating Feynman diagrams with matrix product states
2602.02665 | Tue Feb 03 2026 | cond-mat.str-el hep-th | PDF
This text reviews, hopefully in a pedagogical manner, a series of works on the automatic calculation of Feynman diagrams in the context of quantum nanoelectronics (Keldysh formalism), with an application to the Kondo effect in the out-of-equilibrium single impurity Anderson model. It includes a discussion of (A) how to deal with the proliferation of diagrams, (B) how to calculate them using the Tensor Cross Interpolation algorithm instead of Monte Carlo, and (C) how to resum the obtained series. These notes correspond to a lecture given at the Autumn School on Correlated Electrons 2025 in Jülich, Germany. The book with all the lectures of the school (edited by Eva Pavarini, Erik Koch, Alexander Lichtenstein, and Dieter Vollhardt) is available in open access.
Compiling Quantum Regular Language States
2602.02698 | Mon Feb 02 2026 | quant-ph cs.FL | PDF
State preparation compilers for quantum computers typically sit at two extremes: general-purpose routines that treat the target as an opaque amplitude vector, and bespoke constructions for a handful of well-known state families. We ask whether a compiler can instead accept simple, structure-aware specifications while providing predictable resource guarantees. We answer this by designing and implementing a quantum state-preparation compiler for regular language states (RLS): uniform superpositions over bitstrings accepted by a regular description, and their complements. Users describe the target state via (i) a finite set of bitstrings, (ii) a regular expression, or (iii) a deterministic finite automaton (DFA), optionally with a complement flag. By translating the input to a DFA, minimizing it, and mapping it to an optimal matrix product state (MPS), the compiler obtains an intermediate representation (IR) that exposes and compresses hidden structure. The efficient DFA representation and minimization offload expensive linear algebra computation in exchange for simpler automata manipulations. The combination of the regular-language frontend and this IR gives concise specifications not only for RLS but also for their complements, which might otherwise require exponentially large state descriptions. This enables state preparation of an RLS or its complement with the same asymptotic resources and compile time. We outline two hardware-aware backends: SeqRLSP, which yields linear-depth, ancilla-free circuits for linear nearest-neighbor architectures via sequential generation, and TreeRLSP, which achieves logarithmic depth on all-to-all connectivity via a tree tensor network. We prove depth and gate-count bounds scaling with the system size and the state's maximal Schmidt rank, and we give explicit compile-time bounds that expose the benefit of our approach. We implement and evaluate the pipeline.
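The DFA-to-tensor-network mapping behind the IR can be illustrated with a toy automaton for the even-parity language: each MPS-like core is just the DFA's transition table, so contracting the chain evaluates acceptance of a bitstring. This is a sketch only; the compiler's actual minimization, canonical forms, and optimal-MPS construction are more involved.

```python
import itertools
import numpy as np

# Toy DFA over {0, 1}: accepts bitstrings with an even number of 1s.
# States: 0 = even parity (start and accepting), 1 = odd parity.
trans = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def dfa_mps_cores(trans, n_states, n_bits):
    """One core per bit: core[b][s, s2] = 1 iff trans(s, b) = s2.
    Contracting the chain yields 1 on accepted strings and 0 otherwise."""
    core = np.zeros((2, n_states, n_states))
    for (s, b), s2 in trans.items():
        core[b, s, s2] = 1.0
    return [core] * n_bits

def accepts(cores, bits, start, accepting):
    v = np.zeros(cores[0].shape[1])
    v[start] = 1.0
    for core, b in zip(cores, bits):
        v = v @ core[b]   # follow the (deterministic) transition
    return sum(v[s] for s in accepting) > 0.5

cores = dfa_mps_cores(trans, n_states=2, n_bits=4)
accepted = [b for b in itertools.product((0, 1), repeat=4)
            if accepts(cores, b, start=0, accepting={0})]
# a uniform RLS would place amplitude 1/sqrt(len(accepted)) on each accepted string
```

Complementation in this picture is just swapping the accepting-state set, which is why the complement costs no extra asymptotic resources.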
Approaching the Thermodynamic Limit with Neural-Network Quantum States
2602.03598 | Mon Feb 02 2026 | cond-mat.str-el cond-mat.dis-nn quant-ph | PDF
Accessing the thermodynamic-limit properties of strongly correlated quantum matter requires simulations on very large lattices, a regime that remains challenging for numerical methods, especially in frustrated two-dimensional systems. We introduce the Spatial Attention mechanism, a minimal and physically interpretable inductive bias for Neural-Network Quantum States, implemented as a single learned length scale within the Transformer architecture. This bias stabilizes large-scale optimization and enables access to thermodynamic-limit physics through highly accurate simulations on unprecedented system sizes within the Variational Monte Carlo framework. Applied to the spin-1/2 triangular-lattice Heisenberg antiferromagnet, our approach achieves state-of-the-art results on large clusters. The ability to simulate such large systems allows controlled finite-size scaling of energies and order parameters, enabling the extraction of experimentally relevant quantities such as spin-wave velocities and uniform susceptibilities. In turn, we find extrapolated thermodynamic-limit energies systematically better than those obtained with tensor-network approaches such as iPEPS. The resulting magnetization is strongly renormalized relative to the classical value, revealing that less accurate variational states systematically overestimate magnetic order. Analysis of the optimized wave function further suggests an intrinsically non-local sign structure, indicating that the sign problem cannot be removed by local basis transformations. We finally demonstrate the generality of the method by obtaining state-of-the-art energies for a J1-J2 Heisenberg model on a square lattice, outperforming Residual Convolutional Neural Networks.
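One plausible reading of "a single learned length scale" is an additive, distance-decayed bias on the attention scores. The numpy sketch below is written under that assumption; the function name spatial_attention and the linear bias -d/length_scale are ours, and the paper's exact parameterization may differ.

```python
import numpy as np

def spatial_attention(q, k, v, coords, length_scale):
    """Single-head attention whose scores are penalized by the physical
    distance between lattice sites, controlled by one length scale."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    scores = q @ k.T / np.sqrt(q.shape[-1]) - d / length_scale
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(1)
n, dim = 5, 4
q, k, v = (rng.standard_normal((n, dim)) for _ in range(3))
coords = rng.standard_normal((n, 2))   # site positions on the lattice
out = spatial_attention(q, k, v, coords, length_scale=1.0)
```

As length_scale shrinks, each site attends only to itself; as it grows, the bias vanishes and ordinary global attention is recovered, which is the kind of tunable locality the abstract describes.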
Sampling two-dimensional isometric tensor network states
2602.01981 | Mon Feb 02 2026 | quant-ph physics.comp-ph | PDF
Sampling a quantum system's underlying probability distribution is an important computational task, e.g., for quantum advantage experiments and quantum Monte Carlo algorithms. Tensor networks are an invaluable tool for efficiently representing states of large quantum systems with limited entanglement. Algorithms for sampling one-dimensional (1D) tensor networks are well established and utilized in several 1D tensor network methods. In this paper we introduce two novel sampling algorithms for two-dimensional (2D) isometric tensor network states (isoTNS) that can be viewed as extensions of algorithms for 1D tensor networks. The first algorithm performs independent sampling and yields a single configuration together with its associated probability. The second algorithm employs a greedy search strategy to identify K high-probability configurations and their corresponding probabilities. Numerical results demonstrate the effectiveness of these algorithms across quantum states with varying entanglement and system size.
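The well-established 1D building block these 2D algorithms extend — perfect sampling from a right-canonical matrix product state by drawing each site from its exact conditional — can be sketched as follows. This is a minimal real-valued numpy version, not the paper's isoTNS algorithm.

```python
import numpy as np

def right_canonicalize(cores):
    """Right-to-left sweep so each core A satisfies sum_s A[s] A[s]^T = 1."""
    cores = [c.copy() for c in cores]
    for i in range(len(cores) - 1, 0, -1):
        chi_l, d, chi_r = cores[i].shape
        q, r = np.linalg.qr(cores[i].reshape(chi_l, d * chi_r).T)
        cores[i] = q.T.reshape(-1, d, chi_r)
        cores[i - 1] = np.einsum('ldr,rk->ldk', cores[i - 1], r.T)
    return cores

def sample(cores, rng):
    """Draw one configuration and its exact probability from an MPS."""
    cores = right_canonicalize(cores)
    cores[0] = cores[0] / np.sqrt(np.sum(cores[0] ** 2))   # normalize the state
    v, bits, prob = np.ones(1), [], 1.0
    for core in cores:
        w = np.einsum('l,ldr->dr', v, core)   # unnormalized conditional branches
        p = np.sum(w ** 2, axis=1)
        p = p / p.sum()
        s = rng.choice(len(p), p=p)
        bits.append(int(s))
        prob *= p[s]
        v = w[s] / np.linalg.norm(w[s])       # condition on the drawn outcome
    return bits, prob

rng = np.random.default_rng(0)
shapes = [(1, 2, 2), (2, 2, 2), (2, 2, 2), (2, 2, 1)]   # 4 sites, bond dimension 2
cores = [rng.standard_normal(s) for s in shapes]
bits, prob = sample(cores, rng)
```

A greedy variant along these lines would, roughly, keep the K most probable branches at each step instead of drawing one at random.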
Optimizing Tensor Train Decomposition in DNNs for RISC-V Architectures Using Design Space Exploration and Compiler Optimizations
2602.00555 | Mon Feb 02 2026 | cs.LG cs.AI cs.AR cs.MS | PDF
Deep neural networks (DNNs) have become indispensable in many real-life applications such as natural language processing and autonomous systems. However, deploying DNNs on resource-constrained devices, e.g., RISC-V platforms, remains challenging due to the high computational and memory demands of fully connected (FC) layers, which dominate resource consumption. Low-rank factorization (LRF) offers an effective approach to compressing FC layers, but the vast design space of LRF solutions involves complex trade-offs among FLOPs, memory size, inference time, and accuracy, making the LRF process complex and time-consuming. This paper introduces an end-to-end LRF design space exploration methodology and a specialized design tool for optimizing FC layers on RISC-V processors. Using the Tensor Train Decomposition (TTD) offered by the TensorFlow T3F library, the proposed work prunes the LRF design space by excluding, first, inefficient decomposition shapes and, second, solutions with poor inference performance on RISC-V architectures. Compiler optimizations are then applied to enhance custom T3F layer performance, minimizing inference time and boosting computational efficiency. On average, our TT-decomposed layers run 3x faster than IREE and 8x faster than Pluto on the same compressed model. This work provides an efficient solution for deploying DNNs on edge and embedded devices powered by RISC-V architectures.
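The TT-SVD construction underlying TTD can be sketched in plain numpy: reshape an FC weight matrix into a higher-order tensor, then peel off one core per mode with truncated SVDs. The shapes and ranks below are illustrative; the T3F library's actual API and TT-matrix format differ.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """TT-SVD: peel off one core per mode with rank-truncated SVDs."""
    shape = tensor.shape
    cores, rank, mat = [], 1, tensor
    for n in shape[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, n, r))   # left-orthogonal core
        mat = s[:r, None] * vt[:r]                   # carry the remainder right
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense tensor."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))    # a small FC weight matrix
T = W.reshape(4, 4, 4, 4)            # tensorize before TT decomposition
cores_exact = tt_svd(T, max_rank=16)
cores_small = tt_svd(T, max_rank=2)
```

With max_rank=2 the four cores store 48 numbers instead of the 256 in the dense 16x16 matrix, at the cost of reconstruction error; the design-space exploration described above is precisely about choosing such shapes and ranks well.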
Published in ACM Transactions on Embedded Computing Systems 24, 6, Article 171 (October 2025), 34 pages