Info
A follow-up to the RBM scaling study, focusing specifically on RBM overparametrization and pruning. As in some pruning studies in ML, we find that pruning has destructive effects that can go unnoticed if only general performance metrics are monitored. Published in PRB.
Abstract
Restricted Boltzmann machines (RBMs) have proven to be a powerful tool for learning quantum wave-function representations from qubit projective measurement data. Since the number of classical parameters needed to encode a quantum wave function scales rapidly with the number of qubits, the ability to learn efficient representations is of critical importance. In this paper, we study magnitude-based pruning as a way to compress the wave-function representation in an RBM, focusing on RBMs trained on data from the transverse-field Ising model in one dimension. We find that pruning can reduce the total number of RBM weights, but the threshold at which the reconstruction accuracy starts to degrade varies significantly depending on the phase of the model. In a gapped region of the phase diagram, the RBM admits pruning of over half of the weights while still accurately reproducing relevant physical observables. At the quantum critical point, however, even a small amount of pruning can lead to significant loss of accuracy in the physical properties of the reconstructed quantum state. Our results highlight the importance of tracking all relevant observables, since their sensitivity to pruning varies strongly. Finally, we find that sparse RBMs are trainable, and we discuss how a successful sparsity pattern can be created without pruning.
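The pruning criterion studied here is magnitude-based: the visible-hidden couplings with the smallest absolute values are removed first. As a minimal sketch only, assuming a plain NumPy representation of the RBM weight matrix (the function name, layer sizes, and random weights below are illustrative assumptions rather than the actual setup used in the paper):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, prune_fraction: float) -> np.ndarray:
    """Zero out the `prune_fraction` smallest-magnitude entries of `weights`."""
    flat = np.abs(weights).ravel()
    k = int(prune_fraction * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude; entries at or below it are pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

# Toy RBM weight matrix: 16 visible x 16 hidden units (sizes are illustrative).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 16))

for frac in (0.25, 0.5, 0.75):
    W_pruned = magnitude_prune(W, frac)
    sparsity = 1.0 - np.count_nonzero(W_pruned) / W.size
    print(f"prune_fraction={frac:.2f} -> sparsity {sparsity:.2f}")
```

In the spirit of the paper's main point, after each pruning step one would re-evaluate the physical observables of the reconstructed state rather than a single aggregate performance metric, since different observables degrade at very different sparsity levels depending on the phase.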
Talk
I presented this work in an invited talk titled “The efficiency of machine-learning quantum states” at the Condensed Matter Theory seminar at Harvard. Alas, the recording is not available.