Mutual information maximizing quantum generative adversarial networks

ArXiv ID: 2309.01363

Authors: Unknown

Abstract

One of the most promising applications in the era of Noisy Intermediate-Scale Quantum (NISQ) computing is quantum generative adversarial networks (QGANs), which offer significant quantum advantages over classical machine learning in various domains. However, QGANs suffer from mode collapse and lack explicit control over the features of generated outputs. To overcome these limitations, we propose InfoQGAN, a novel quantum-classical hybrid generative adversarial network that integrates the principles of InfoGAN with a QGAN architecture. Our approach employs a variational quantum circuit for data generation, a classical discriminator, and a Mutual Information Neural Estimator (MINE) to explicitly optimize the mutual information between latent codes and generated samples. Numerical simulations on synthetic 2D distributions and Iris dataset augmentation demonstrate that InfoQGAN effectively mitigates mode collapse while achieving robust feature disentanglement in the quantum generator. By leveraging these advantages, InfoQGAN not only enhances training stability but also improves data augmentation performance through controlled feature generation. These results highlight the potential of InfoQGAN as a foundational approach for advancing quantum generative modeling in the NISQ era.
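The MINE component mentioned in the abstract estimates the mutual information I(c; G(z)) via the Donsker–Varadhan lower bound, I(X;Y) ≥ E_{p(x,y)}[T] − log E_{p(x)p(y)}[exp(T)]. A minimal numpy sketch follows; it is hypothetical and uses a fixed statistics function `T` where MINE would train a neural network:

```python
import numpy as np

def dv_bound(x, y, T, rng):
    """Donsker-Varadhan lower bound on I(X;Y):
    I(X;Y) >= E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[exp(T(x,y))]."""
    joint_term = T(x, y).mean()              # expectation under the joint
    y_shuffled = rng.permutation(y)          # break pairing -> product of marginals
    marginal_term = np.log(np.exp(T(x, y_shuffled)).mean())
    return joint_term - marginal_term

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
y_corr = 0.9 * x + np.sqrt(1 - 0.9**2) * rng.standard_normal(n)  # correlated pair
y_ind = rng.standard_normal(n)                                   # independent pair

T = lambda a, b: 0.5 * a * b  # fixed statistics function; MINE trains a network here
bound_corr = dv_bound(x, y_corr, T, rng)
bound_ind = dv_bound(x, y_ind, T, rng)
# Correlated samples yield a larger lower bound than independent ones.
```

In InfoQGAN this bound is maximized jointly with the generator so that the latent codes c remain informative about the generated samples.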

Keywords: Quantum Generative Adversarial Networks (QGANs), variational quantum circuit, mutual information neural estimator (MINE), feature disentanglement, NISQ computing, synthetic data augmentation

Complexity vs Empirical Score

  • Math Complexity: 8.5/10
  • Empirical Rigor: 2.0/10
  • Quadrant: Lab Rats
  • Why: The paper is dense with advanced mathematics, including variational quantum circuits, mutual information estimation via MINE, and complex optimization, but its evaluation relies solely on numerical simulations over synthetic and standard toy datasets, without hardware experiments or real-world implementation details.
```mermaid
flowchart TD
    A["Research Goal: <br>Improve QGAN stability <br>& feature control"] --> B

    subgraph B["Proposed InfoQGAN Architecture"]
        B1["Variational Quantum Circuit <br>(Generator)"]
        B2["Classical Discriminator"]
        B3["Mutual Information Neural Estimator <br>(MINE)"]
        B1 --> B3
        B3 --> B1
    end

    B --> C
    subgraph C["Data & Latent Codes"]
        C1["Synthetic 2D Distributions"]
        C2["Iris Dataset"]
        C3["Latent Codes c"]
    end

    C --> D
    subgraph D["Training Loop"]
        D1["Generator G(z, c) produces samples"]
        D2["Discriminator distinguishes real/fake"]
        D3["MINE maximizes I(c; G(z, c))"]
        D1 --> D2
        D2 --> D1
        D3 --> D1
    end

    D --> E
    subgraph E["Results"]
        E1["Mitigated Mode Collapse"]
        E2["Robust Feature Disentanglement"]
        E3["Enhanced Training Stability"]
        E4["Improved Data Augmentation"]
    end
```