Overview

  • Project funded by SERB (Science and Engineering Research Board).
  • Project duration: Mar 2022 to Mar 2025.

Description

Machine learning (ML) and Physics complement each other: Physics offers rigour and precision, while ML efficiently scales analysis and control to large data sizes.

In this project, we are developing ML methods for computational Physics, applied to lattice quantum chromodynamics (LQCD) and condensed matter Physics. So far, we have developed efficient sampling methods for studying large statistical systems such as the Gross-Neveu (GN) model [1], scalar \(\phi^4\) theory [2], the XY model [4] and U(1) gauge theory [3]. From the Physics perspective, our methods are useful for studying phase transitions and continuum limits; they can be applied to very large model sizes, where conventional Monte Carlo methods and even general-purpose ML methods fail. From the ML perspective, our methods are useful for conditioning generative ML models, avoiding the problem of mode collapse, and learning from a small number of ground-truth samples.
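
To make the core idea concrete, below is a minimal, self-contained sketch in PyTorch of flow-based sampling with an exactness guarantee, in the spirit of [1-3]: a normalizing flow proposes lattice configurations for a 2D scalar \(\phi^4\) theory, and an independence Metropolis-Hastings step corrects for the flow's imperfections so that the chain samples the exact Boltzmann distribution \(\propto e^{-S[\phi]}\). The lattice size, couplings, network architecture, and the omission of training and of conditioning on the couplings are illustrative simplifications, not the published setup.

    # Minimal sketch (not the project's code): flow-based proposals for a 2D scalar
    # phi^4 lattice theory, corrected by an independence Metropolis-Hastings step so
    # that the Markov chain samples exp(-S[phi]) exactly. All hyperparameters below
    # (lattice size, couplings, network width/depth) are illustrative assumptions.
    import torch
    import torch.nn as nn

    L = 8                      # lattice extent (L x L sites), assumed value
    m2, lam = -4.0, 8.0        # bare mass^2 and quartic coupling, assumed values

    def action(phi):
        """Euclidean phi^4 action on a periodic L x L lattice; phi has shape (B, L, L)."""
        kin = sum(((phi - torch.roll(phi, 1, dims=d)) ** 2).sum(dim=(1, 2)) for d in (1, 2))
        pot = (m2 * phi ** 2 + lam * phi ** 4).sum(dim=(1, 2))
        return 0.5 * kin + pot

    class AffineCoupling(nn.Module):
        """RealNVP-style coupling layer acting on one checkerboard half of the lattice."""
        def __init__(self, parity):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(L * L, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * L * L))
            x, y = torch.meshgrid(torch.arange(L), torch.arange(L), indexing="ij")
            self.register_buffer("mask", ((x + y) % 2 == parity).float())

        def forward(self, z):
            frozen = z * self.mask                               # sites left unchanged
            s, t = self.net(frozen.flatten(1)).chunk(2, dim=1)   # scale and shift
            s = torch.tanh(s.reshape(-1, L, L)) * (1 - self.mask)
            t = t.reshape(-1, L, L) * (1 - self.mask)
            z = frozen + (1 - self.mask) * (z * torch.exp(s) + t)
            return z, s.sum(dim=(1, 2))                          # log|det Jacobian|

    class Flow(nn.Module):
        def __init__(self, n_layers=8):
            super().__init__()
            self.layers = nn.ModuleList(AffineCoupling(i % 2) for i in range(n_layers))

        def sample(self, batch):
            z = torch.randn(batch, L, L)                 # Gaussian prior configurations
            logq = -0.5 * (z ** 2).sum(dim=(1, 2))       # prior log-density (up to const.)
            for layer in self.layers:
                z, logdet = layer(z)
                logq = logq - logdet                     # change of variables
            return z, logq

    @torch.no_grad()
    def metropolis_chain(flow, n_steps=200):
        """Independence Metropolis-Hastings with the flow as proposal: the accept/reject
        step makes the chain exact for exp(-S) even when the flow is only approximate."""
        phi, logq = flow.sample(1)
        logw = -action(phi) - logq                       # log[ p(phi) / q(phi) ] up to const.
        samples, accepted = [], 0
        for _ in range(n_steps):
            phi_new, logq_new = flow.sample(1)
            logw_new = -action(phi_new) - logq_new
            if torch.rand(1).log() < (logw_new - logw):
                phi, logw = phi_new, logw_new
                accepted += 1
            samples.append(phi.squeeze(0).clone())
        print(f"acceptance rate: {accepted / n_steps:.2f}")
        return torch.stack(samples)

    flow = Flow()
    # Training of the flow (e.g. by minimizing a KL divergence to exp(-S)) and
    # conditioning on the couplings are omitted; an untrained flow still gives a
    # correct, though inefficient, chain.
    configs = metropolis_chain(flow, n_steps=200)
    print(configs.shape)    # (200, L, L)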

Publications

  1. Ankur Singha, Dipankar Chakrabarti, and Vipul Arora. “Generative learning for the problem of critical slowing down in lattice Gross-Neveu model.” SciPost Physics Core (2022).
  2. Ankur Singha, Dipankar Chakrabarti, and Vipul Arora. “Conditional normalizing flow for Markov chain Monte Carlo sampling in the critical region of lattice field theory.” Physical Review D 107, no. 1 (2023): 014512.
  3. Ankur Singha, Dipankar Chakrabarti, and Vipul Arora. “Sampling U(1) gauge theory using a retrainable conditional flow-based model.” Physical Review D 108, no. 7 (2023).
  4. Vikas Kanaujia, Mathias S. Scheurer, and Vipul Arora. “AdvNF: Reducing Mode Collapse in Conditional Normalising Flows using Adversarial Learning.” arXiv preprint arXiv:2401.15948 (2024).