SciPost Submission Page

Generative Learning of Continuous Data by Tensor Networks

by Alex Meiburg, Jing Chen, Jacob Miller, Raphaëlle Tihon, Guillaume Rabusseau, Alejandro Perdomo-Ortiz

Submission summary

Authors (as registered SciPost users): Jing Chen
Submission information
Preprint Link: scipost_202404_00031v1  (pdf)
Date submitted: 2024-04-22 03:58
Submitted by: Chen, Jing
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Computational
  • Quantum Physics
Approaches: Theoretical, Computational

Abstract

Beyond their origin in modeling many-body quantum systems, tensor networks have emerged as a promising class of models for solving machine learning problems, notably in unsupervised generative learning. While possessing many desirable features arising from their quantum-inspired nature, tensor network generative models have previously been largely restricted to binary or categorical data, limiting their utility in real-world modeling problems. We overcome this by introducing a new family of tensor network generative models for continuous data, which are capable of learning from distributions containing continuous random variables. We develop our method in the setting of matrix product states, first deriving a universal expressivity theorem proving the ability of this model family to approximate any reasonably smooth probability density function with arbitrary precision. We then benchmark the performance of this model on several synthetic and real-world datasets, finding that the model learns and generalizes well on distributions of continuous and discrete variables. We develop methods for modeling different data domains, and introduce a trainable compression layer which is found to increase model performance given limited memory or computational resources. Overall, our methods give important theoretical and empirical evidence of the efficacy of quantum-inspired methods for the rapidly growing field of generative learning.

Author indications on fulfilling journal expectations

  • Provide a novel and synergetic link between different research areas.
  • Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work.
  • Detail a groundbreaking theoretical/experimental/computational discovery.
  • Present a breakthrough on a previously-identified and long-standing research stumbling block.
Current status:
Awaiting resubmission

Reports on this Submission

Anonymous Report 3 on 2024-6-4 (Invited Report)

Strengths

1. Well and clearly written
2. Useful addition to the literature

Weaknesses

Compression layer could have been discussed in more detail

Report

The authors examine the use of tensor networks (TN) for machine learning with continuous data using a truncated feature map. While this has been proposed before, it has never been made practical. The main contribution of this paper is thus the introduction of the compression layer, which seems a valuable innovation of the feature map formalism, and several theorems that provide a rigorous basis for the use of their proposed tensor network states. Various numerical illustrations and checks are provided, showing that the approach is indeed practicable.
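For concreteness, the construction being praised can be sketched in a few lines: a continuous input is expanded in a truncated basis of feature functions, and a trainable linear map compresses those features to a smaller effective dimension before contraction with the MPS cores. The snippet below is my own minimal illustration (a Fourier basis, arbitrary shapes, and made-up names; it is not code from the paper):

    import numpy as np

    def fourier_features(x, d):
        """Truncated Fourier feature map on [0, 1): d basis functions."""
        feats = [np.ones_like(x)]          # constant mode
        k = 1
        while len(feats) < d:              # paired cosine/sine modes
            feats.append(np.cos(2 * np.pi * k * x))
            if len(feats) < d:
                feats.append(np.sin(2 * np.pi * k * x))
            k += 1
        return np.stack(feats, axis=-1)    # shape (..., d)

    rng = np.random.default_rng(0)
    d, d_eff = 16, 4                 # raw and compressed feature dimensions
    W = rng.normal(size=(d, d_eff))  # trainable compression layer (one per site)

    x = rng.uniform(size=100)        # batch of continuous samples
    phi = fourier_features(x, d)     # (100, d) raw features
    phi_c = phi @ W                  # (100, d_eff) compressed features

    # phi_c plays the role of the physical index fed into an MPS core,
    # so the core only needs physical dimension d_eff instead of d.
    core = rng.normal(size=(1, d_eff, 8))        # (left, phys, right)
    msg = np.einsum('bp,lpr->blr', phi_c, core)  # per-sample site contraction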

Requested changes

At the bottom of page 2 the authors state that DMRG would allow them to grow/shrink the bond dimension dynamically, which they say is impossible with gradient descent. I believe the authors here refer to a two-site update, the most widely used method for dynamically changing the bond dimension. However, this can be done with both gradient descent and DMRG. Additionally, DMRG (with eigenvalue-problem updates for the tensors) is never used for machine learning; only stochastic gradient descent is, which differs from DMRG only in how the tensors are updated. One could consider this a kind of time evolution, so tDMRG would also be a valid name for it.
I only request that this paragraph be changed accordingly.
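To make the point concrete, the following is a minimal sketch (mine, not the authors') of a two-site update driven by a plain gradient step rather than an eigensolver; the truncated SVD after the update is what grows or shrinks the bond dimension:

    import numpy as np

    def two_site_gradient_step(A, B, grad, lr=0.01, chi_max=32, cutoff=1e-10):
        # A: (Dl, d, D), B: (D, d, Dr); grad is the loss gradient w.r.t.
        # the merged two-site tensor. The new bond dimension is chosen
        # adaptively by the SVD truncation, exactly as in two-site DMRG.
        Dl, d, _ = A.shape
        Dr = B.shape[2]
        theta = np.tensordot(A, B, axes=(2, 0))   # merge: (Dl, d, d, Dr)
        theta = theta - lr * grad                 # gradient-descent update
        U, S, Vh = np.linalg.svd(theta.reshape(Dl * d, d * Dr),
                                 full_matrices=False)
        keep = max(1, min(chi_max, int(np.sum(S > cutoff * S[0]))))
        A_new = U[:, :keep].reshape(Dl, d, keep)  # left-canonical core
        B_new = (S[:keep, None] * Vh[:keep]).reshape(keep, d, Dr)
        return A_new, B_new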

Recommendation

Publish (easily meets expectations and criteria for this Journal; among top 50%)

  • validity: top
  • significance: good
  • originality: high
  • clarity: top
  • formatting: perfect
  • grammar: perfect

Anonymous Report 2 on 2024-5-29 (Invited Report)

Strengths

1. Very clear writing and presentation, with helpful figures and diagrams.
2. In-depth examination of certain aspects of the proposed method, like the effect of different choices of basis for the continuous feature maps in the limit of large dimension.
3. Rigorous results about universal approximation properties.
4. Multiple demonstrations on various datasets.
5. Novel idea about adapting the continuous basis during training.

Weaknesses

1. Progress specifically on the algorithm for training the model is somewhat incremental.
2. Demonstrations are on somewhat 'academic' datasets.

Report

The authors present an extension to a framework for unsupervised generative modeling using tensor networks. Their extension preserves all of the desirable properties of the underlying tensor network model, such as canonical forms and perfect sampling. After describing their proposal and its advantages, they study different possible choices of feature functions and what distributions they approach on average in the limit of many features. They also prove rigorously that the models can capture any function in certain limits, and discuss an interesting compression idea that can be used during training to adapt the feature map. The method is demonstrated to give very good results on five different datasets.
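For readers outside the tensor network community, "perfect sampling" here means exact autoregressive sampling with no Markov chain. Below is my own minimal sketch for the discrete, right-canonical case (in the authors' setting the physical leg would instead come from the continuous feature map; names and shapes are illustrative, not theirs):

    import numpy as np

    def perfect_sample(cores, rng):
        # Draw one exact sample from a right-canonical MPS Born machine.
        # cores: list of arrays of shape (Dl, d, Dr); right-canonicality
        # makes the environment to the right of each site the identity,
        # so conditional marginals reduce to simple vector norms.
        v = np.ones(1)                       # conditional left environment
        sample = []
        for B in cores:
            w = np.einsum('l,lsr->sr', v, B)     # amplitudes, shape (d, Dr)
            probs = np.sum(np.abs(w) ** 2, axis=1)
            probs /= probs.sum()
            s = rng.choice(len(probs), p=probs)  # sample this site
            sample.append(s)
            v = w[s] / np.linalg.norm(w[s])      # condition on the outcome
        return sample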

I think this work provides a welcome extension to the framework of using tensor networks for machine learning, since continuous features are an important case and need to be explored carefully. Combined with the clear writing, valuable results, and demonstrations, I think this paper should be accepted. I do have a few requested changes below.

Requested changes

The authors assume a quantum physics audience in certain parts of the text, like on page 2 where they say that tensor networks are "often thought of as describing many-body wavefunctions" and on page 1 where they say that the restriction to discrete variables can be best understood through a wavefunction picture. I would strongly suggest that the authors rewrite these sentences a bit to acknowledge that this is only one perspective, so they could say "often thought of by physicists as describing many-body wavefunctions" on page 2, and on page 1 something like "because the BM formalism is primarily used in quantum physics where a TN describes a discrete 'orbital' or site space, it seems natural from that standpoint to use them exclusively for discrete variables".

I would recommend the authors briefly explain in the text what they mean by "DMRG" since it has slightly different definitions to different readers. Usually it means an algorithm for finding a dominant eigenvector of a linear operator, though the term is certainly overloaded as there is also "time-dependent DMRG" etc. Here I think the authors mainly mean the idea of optimizing two cores at a time and using a low-rank factorization to adapt the bond indices afterward, which is a perfectly good definition.

The authors should correct a small typo on page 3 "REf" instead of "Ref" in the second column.

The authors should consider citing the following papers:
1. "Distributive Pre-Training of Generative Modeling Using Matrix-Product States" arxiv:2306.14787, which proposes similar ideas about continuous feature maps, though used in a different way
2. "Learning multidimensional Fourier series with tensor trains" by Wahls, which includes the idea of continuous inputs to tensor networks though for regression
3. "Generative modeling via tensor train sketching" by Hur et al., which implements a rather different algorithm for generative modeling with tensor networks besides the present one and the cross approximation metioned by the authors
4. "Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models" by Wesel and Batselier, which considers inputting continuous variables by "digitizing" them using the idea of 'quantics' or 'quantized' encoding.

Recommendation

Publish (easily meets expectations and criteria for this Journal; among top 50%)

  • validity: top
  • significance: high
  • originality: high
  • clarity: top
  • formatting: perfect
  • grammar: perfect

Anonymous Report 1 on 2024-5-20 (Invited Report)

Strengths

1. The context and motivation are clearly presented.
2. The benefits and interesting properties of the proposed method are well articulated.
3. The presented method is well analyzed from a theoretical standpoint.
4. The text is generally well-written and clear.

Weaknesses

1. Lack of comparison between the obtained numerical results and other state-of-the-art methods.
2. Lack of benchmarking on tasks that challenge the performance of state-of-the-art methods.

Report

In this work, the authors propose a modification of tensor network generative models that enables the learning of probability distributions containing continuous random variables. The context, related works, and motivation for their work are presented clearly. The properties of the proposed method (including sampling, training methods, feature maps, and universal approximation) are well analyzed from a theoretical standpoint. The authors also test the method numerically on five distinct tasks/datasets. In this section, I think it would be beneficial to compare the obtained results with those obtained using other state-of-the-art methods in the literature. Such a comparison with the variational autoencoder is provided in the section about the XY model, but is absent in other sections that present results on more commonly used benchmarking datasets, such as Two Moons and Iris.

Moreover, all benchmarking tasks explored in this work, except for the XY model, could be regarded as toy tasks, which are much simpler than what is accomplished with state-of-the-art distribution learning methods. I understand that the paper is focused on presenting this method, and applying it to complex tasks might be out of scope. However, in my opinion, demonstrating the capability of the model on a task closer to real-world scenarios would significantly strengthen the presentation and allow for a better understanding of the practical benefits and limitations of the method.

All in all, I think the work is well-written and presents significant results, so it should be accepted for publication. The proposed improvements are optional.

Requested changes

1. (Optional) A table comparing the performance of the proposed method on benchmark tasks with other methods in the literature.
2. (Optional) An additional section exploring the performance on a more challenging task.

Recommendation

Publish (meets expectations and criteria for this Journal)

  • validity: high
  • significance: high
  • originality: good
  • clarity: high
  • formatting: perfect
  • grammar: perfect
