Understanding classical and quantum information at large scales

Date
2020
Authors
Meng, Xiangyi
Abstract
This dissertation contributes to a better understanding of the concepts, theories, and applications of classical and quantum information in various large-scale systems. It is structured in two parts.

The first half concerns classical systems. First, we study a basic yet never fully appreciated property of many real-world complex networks, the scale-free (SF) property, which has often been characterized solely by the degree distribution. We define a new fundamental quantity, the degree-degree distance, which represents the SF property better: it exhibits statistically more significant power laws and better explains the evolution of real-world networks such as Wikipedia webpages. Second, we study brain tractography of a healthy subject using diffusion-weighted magnetic resonance imaging (dMRI) data and find that the dependence of the dMRI signal on the interpulse time can decode brain structure below the imaging resolution and may help unravel how information is transmitted. This finding is confirmed by Monte Carlo simulations of water-molecule diffusion, which show how to optimally measure the thickness of axon sheets in the brain.

The second half concerns quantum systems. Our first work asks how to establish long-distance entanglement transmission in a quantum network in which every link has nonzero concurrence, a measure of bipartite entanglement. We introduce a fundamental statistical theory, concurrence percolation theory (ConPT), and find that it predicts an entanglement transmission threshold lower than the known classical-percolation-based results, a "quantum advantage" that is more general and efficient than expected. ConPT also exhibits percolation-like universal critical behavior, derived by finite-size analysis. Our second work studies the continuous-time quantum walk as an open system that interacts strongly with its environment, where non-Markovianity may significantly speed up the dynamics. We confirm this speedup by first introducing a general multiscale perturbation method for integro-differential equations and then building the Hamiltonian on regular networks (e.g., star or complete graphs), which can be mapped to an error-correction scheme of practical significance. Our third work explores possible uses of entanglement entropy (EE) in machine learning. We introduce a new recurrent neural network architecture based on long short-term memory that uses tensorization techniques to forecast chaotic time series; its learnability is determined not only by the number of free parameters but also by the tensorization complexity, recognized as how the EE scales.
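As a minimal illustration of the first classical result, the Python sketch below (assuming NetworkX and the standard continuous maximum-likelihood estimator of Clauset et al.) fits a power-law exponent to a per-edge quantity. The product of endpoint degrees used here is only a hypothetical stand-in for the dissertation's degree-degree distance, whose exact definition is not given in this abstract.

import math
import networkx as nx

def fit_power_law_exponent(values, x_min):
    """Continuous MLE of a power-law exponent (Clauset et al. 2009):
    alpha = 1 + n / sum(ln(x / x_min)) over all x >= x_min."""
    tail = [x for x in values if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Hypothetical stand-in for the degree-degree distance: the product of
# the two endpoint degrees of each edge (NOT the dissertation's definition).
G = nx.barabasi_albert_graph(10_000, 3, seed=42)
edge_values = [G.degree(u) * G.degree(v) for u, v in G.edges()]

print("fitted exponent:", fit_power_law_exponent(edge_values, x_min=100))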
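The dMRI result rests on the fact that restricted diffusion leaves a signature in how far water molecules travel as a function of diffusion (interpulse) time. Below is a minimal Monte Carlo sketch of that mechanism in NumPy, far simpler than the dissertation's simulation and with all parameter values hypothetical: for one-dimensional diffusion between reflecting membranes a distance L apart, the mean-squared displacement grows linearly at short times but saturates near L^2/6, so the time dependence encodes the sub-resolution length scale L.

import numpy as np

rng = np.random.default_rng(0)

L, D, dt = 1.0, 1.0, 2e-5            # compartment size, diffusivity, time step (hypothetical units)
n_walkers, n_steps = 5_000, 10_000
sigma = np.sqrt(2 * D * dt)          # 1D Gaussian step size

x0 = rng.uniform(0.0, L, n_walkers)  # start uniformly inside the compartment
x = x0.copy()
for step in range(1, n_steps + 1):
    x += rng.normal(0.0, sigma, n_walkers)
    # reflecting membranes at 0 and L
    x = np.abs(x)
    x = L - np.abs(L - x)
    if step % 2500 == 0:
        msd = np.mean((x - x0) ** 2)
        print(f"t = {step * dt:.3f}: MSD = {msd:.4f} "
              f"(free diffusion: {2 * D * step * dt:.4f}, plateau ~ {L**2 / 6:.4f})")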
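For the quantum-network result, one ingredient can be stated compactly: the series rule of ConPT, under which concurrences of pure-state links combined by entanglement swapping multiply along a path. The sketch below uses only that rule plus the standard singlet conversion probability of a pure two-qubit state, and omits the parallel rule and the full sponge-crossing computation of ConPT; it shows that the series concurrence of a path decays more slowly than the corresponding classical-percolation path probability, hinting at the quantum advantage.

import math

def singlet_conversion_probability(c):
    """Probability of converting a pure two-qubit state with concurrence c
    into a singlet: twice the smaller Schmidt coefficient, 1 - sqrt(1 - c^2)."""
    return 1.0 - math.sqrt(1.0 - c * c)

c, n = 0.9, 10  # per-link concurrence and path length (hypothetical values)
series_concurrence = c ** n                                   # ConPT series rule
classical_path_prob = singlet_conversion_probability(c) ** n  # classical mapping
print(f"ConPT series concurrence: {series_concurrence:.4f}")
print(f"classical path probability: {classical_path_prob:.6f}")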
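The machine-learning result turns on parameter counting: tensorizing a large weight matrix trades free parameters for a tensor-network structure whose expressiveness can be diagnosed by how the EE scales across a cut. Here is a generic NumPy sketch of that trade-off; the shapes, bond rank, and two-core factorization are hypothetical, and the dissertation's tensorized LSTM is more elaborate.

import numpy as np

# A dense weight matrix mapping a 64-dim input to a 64-dim output,
# reshaped as a two-core tensor train (matrix product operator) with bond rank r.
m1, m2, n1, n2, r = 8, 8, 8, 8, 4
core1 = np.random.randn(m1, n1, r)   # (out_1, in_1, bond)
core2 = np.random.randn(r, m2, n2)   # (bond, out_2, in_2)

# Contract the bond index to recover the full (64 x 64) matrix.
W = np.einsum('abr,rcd->acbd', core1, core2).reshape(m1 * m2, n1 * n2)

dense_params = (m1 * m2) * (n1 * n2)   # 4096 free parameters
tt_params = core1.size + core2.size    # 512 free parameters
print(W.shape, dense_params, tt_params)
# The bond rank r caps the entanglement entropy across the cut at log2(r),
# the kind of scaling the dissertation uses to characterize learnability.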