Overview
- Project funded by the Science and Engineering Research Board (SERB).
- Project duration: Mar 2022 to Mar 2025.
Description
Machine learning (ML) and physics complement each other: physics provides rigor and precision, while ML efficiently scales analysis and control to large data sizes.
In this project, we are developing ML methods for computational physics, applied to lattice quantum chromodynamics (LQCD) and condensed matter physics. So far, we have developed efficient sampling methods to study large statistical systems such as the Gross-Neveu (GN) model [1], scalar \(\phi^4\) theory [2], the XY model [4], and U(1) gauge theory [3]. From a physics perspective, our methods are useful for studying phase transitions and continuum limits, and they scale to very large system sizes where conventional Monte Carlo methods and even general ML methods fail. From an ML perspective, our methods are useful for conditioning generative models, avoiding mode collapse, and learning from a small number of ground-truth samples.
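To make the flow-based sampling idea concrete, the sketch below is a minimal, illustrative example and not the project's code or exact models: a RealNVP-style coupling flow is trained by reverse KL to approximate the Boltzmann weight of a 2D scalar \(\phi^4\) lattice theory and is then used as an independence-Metropolis proposal. The lattice size, couplings, network sizes, and training schedule are placeholder values chosen only for illustration.

```python
# Minimal, illustrative sketch of flow-based sampling for 2D scalar phi^4 theory.
# All parameters below are placeholders; this is not the project's code.
import torch
import torch.nn as nn

L = 8                       # lattice extent (L x L), illustrative
M2, LAM = -4.0, 8.0         # bare mass^2 and quartic coupling, illustrative

def action(phi):
    """phi^4 lattice action (one common convention); phi shape: (batch, L, L)."""
    kin = sum(((torch.roll(phi, -1, dims=d) - phi) ** 2).sum(dim=(1, 2)) for d in (1, 2))
    pot = (M2 * phi ** 2 + LAM * phi ** 4).sum(dim=(1, 2))
    return 0.5 * kin + pot

def checkerboard(parity):
    x = torch.arange(L)
    return ((x[:, None] + x[None, :]) % 2 == parity).float()   # (L, L) mask

class AffineCoupling(nn.Module):
    """Affine coupling layer: update one checkerboard sublattice conditioned on the other."""
    def __init__(self, parity, hidden=64):
        super().__init__()
        self.register_buffer("mask", checkerboard(parity))
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, 3, padding=1, padding_mode="circular"), nn.GELU(),
            nn.Conv2d(hidden, 2, 3, padding=1, padding_mode="circular"))

    def forward(self, phi):                       # phi: (batch, L, L)
        frozen = phi * self.mask
        s, t = self.net(frozen.unsqueeze(1)).chunk(2, dim=1)
        s = s.squeeze(1) * (1 - self.mask)
        t = t.squeeze(1) * (1 - self.mask)
        phi = frozen + (1 - self.mask) * (phi * torch.exp(s) + t)
        return phi, s.sum(dim=(1, 2))             # transformed field, log|det J|

class Flow(nn.Module):
    def __init__(self, n_layers=8):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(i % 2) for i in range(n_layers))

    def sample(self, batch):
        z = torch.randn(batch, L, L)
        log_q = -0.5 * (z ** 2).sum(dim=(1, 2))   # Gaussian prior, up to a constant
        phi = z
        for layer in self.layers:
            phi, log_det = layer(phi)
            log_q = log_q - log_det               # change of variables
        return phi, log_q

flow = Flow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(2000):                          # reverse-KL training (no ground-truth samples needed)
    phi, log_q = flow.sample(64)
    loss = (log_q + action(phi)).mean()           # KL(q || p) up to a constant
    opt.zero_grad(); loss.backward(); opt.step()

# Asymptotically exact sampling: independence Metropolis driven by flow proposals.
with torch.no_grad():
    phi, log_q = flow.sample(1)
    log_p = -action(phi)
    accepted = 0
    for step in range(1000):
        phi_new, log_q_new = flow.sample(1)
        log_p_new = -action(phi_new)
        if torch.rand(()) < torch.exp((log_p_new - log_q_new) - (log_p - log_q)).clamp(max=1.0):
            phi, log_q, log_p = phi_new, log_q_new, log_p_new
            accepted += 1
    print(f"acceptance rate: {accepted / 1000:.2f}")
```

The acceptance rate of such a chain indicates how well the trained flow matches the target distribution; the project's publications develop conditional variants of this general idea for sampling in the critical region [2, 3].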
Objective
This project aims to develop advanced machine learning (ML) techniques to address key challenges in computational physics, with a focus on lattice quantum chromodynamics (LQCD) and condensed matter physics models.
Summary
The project aims to develop efficient sampling methods for large-scale statistical systems such as the Gross-Neveu (GN) model, scalar \(\phi^4\) theory, the XY model, and U(1) gauge theory. We use generative-AI methods to overcome traditional limitations such as critical slowing down, mode collapse, and limited ground-truth availability.
Key Achievements & Impact
Innovative Techniques Developed:
- Developed efficient sampling methods for large-scale statistical systems such as the Gross-Neveu (GN) model, scalar \(\phi^4\) theory, the XY model, and U(1) gauge theory.
- ML methods help overcome traditional limitations such as critical slowing down, mode collapse, and limited ground-truth availability.
Scientific Contribution:
- Enables accurate study of phase transitions and continuum limits in physical systems, especially where conventional Monte Carlo and existing ML techniques fail.
- Offers scalable, conditioned generative models for large-scale physics simulations (a hypothetical illustration of such conditioning is sketched after this list).
Publications in High-Impact Journals:
- 5 high-quality research papers published in SciPost Physics and Physical Review D (2022-2024), demonstrating both ML innovation and fundamental physics contributions.
Dual Impact:
- Physics Benefit: Robust methods for studying critical phenomena and large lattice models.
- ML Benefit: Novel strategies for improving generative models, applicable beyond physics.
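As a rough picture of what such conditioning could look like, the sketch below is hypothetical and written under generic assumptions, not the architecture used in the project's papers: the coupling-layer network receives the action parameter as an extra constant input channel, so a single trained flow can, in principle, propose configurations at many parameter values near a phase transition without retraining.

```python
# Hypothetical sketch of a parameter-conditioned coupling layer (illustrative only,
# not the project's architecture).
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling layer that also sees the action parameter it is conditioned on."""
    def __init__(self, mask, hidden=64):
        super().__init__()
        self.register_buffer("mask", mask)              # checkerboard mask, shape (L, L)
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, 3, padding=1, padding_mode="circular"), nn.GELU(),
            nn.Conv2d(hidden, 2, 3, padding=1, padding_mode="circular"))

    def forward(self, phi, coupling):
        # phi: (batch, L, L) field; coupling: (batch,) action parameter (e.g. a mass or hopping term)
        frozen = phi * self.mask
        cond = coupling.view(-1, 1, 1).expand_as(phi)   # broadcast the parameter over the lattice
        s, t = self.net(torch.stack([frozen, cond], dim=1)).chunk(2, dim=1)
        s = s.squeeze(1) * (1 - self.mask)
        t = t.squeeze(1) * (1 - self.mask)
        phi = frozen + (1 - self.mask) * (phi * torch.exp(s) + t)
        return phi, s.sum(dim=(1, 2))                   # transformed field, log|det J|
```

Training such a layer over a range of parameter values is one generic way to amortize the cost of retraining a generative model as the system is tuned toward criticality.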
Publications
- Ankur Singha, Dipankar Chakrabarti, and Vipul Arora. “Generative learning for the problem of critical slowing down in lattice Gross-Neveu model.” SciPost Physics Core (2022).
- Ankur Singha, Dipankar Chakrabarti, and Vipul Arora. “Conditional normalizing flow for Markov chain Monte Carlo sampling in the critical region of lattice field theory.” Physical Review D 107, no. 1 (2023): 014512.
- Ankur Singha, Dipankar Chakrabarti, and Vipul Arora. “Sampling U(1) gauge theory using a retrainable conditional flow-based model.” Physical Review D 108, no. 7 (2023).
- Vikas Kanaujia, Mathias S. Scheurer, and Vipul Arora. “AdvNF: Reducing Mode Collapse in Conditional Normalising Flows using Adversarial Learning.” arXiv preprint arXiv:2401.15948 (2024).