(Jay) Digvijay Wadekar

Here is the link to my publications. See below for a brief description of a few selected projects.

Using astrophysical systems to probe alternatives to cold, collisionless dark matter
I worked on placing stringent bounds on popular alternatives to cold, collisionless dark matter, such as millicharged DM and hidden-photon DM, using observations of Leo T, a gas-rich dwarf galaxy of the Milky Way. We required that the net heating of the gas in Leo T due to DM interactions not exceed the gas's radiative cooling rate. This yields some of the strongest constraints on dark-photon dark matter and axion-like particles, and adds to existing limits on millicharged DM and magnetic primordial black holes.
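The constraint logic above can be sketched as a simple exclusion scan: any coupling for which the DM-induced heating exceeds the observed cooling rate is ruled out. This is only an illustration of the method; the heating model, its linear scaling, and all numerical values below are hypothetical placeholders, not the ones used in the actual analysis.

```python
import numpy as np

# Placeholder for Leo T's radiative cooling rate (erg / s / cm^3); the real
# value comes from the observed gas properties.
COOLING_RATE = 1.0e-30

def dm_heating_rate(coupling):
    """Hypothetical DM -> gas heating rate, assumed linear in the coupling.

    The true rate depends on the DM model (millicharge, dark photon, ...);
    the prefactor here is an arbitrary placeholder.
    """
    return 2.0e-28 * coupling  # erg / s / cm^3

# Scan couplings and exclude those that would overheat the gas.
couplings = np.logspace(-6, 0, 61)
excluded = dm_heating_rate(couplings) > COOLING_RATE
bound = couplings[excluded].min()  # weakest excluded coupling = upper bound
print(f"couplings above ~{bound:.1e} are excluded")
```

The same scan generalizes to a two-dimensional grid (e.g. coupling vs. DM mass) to trace out an exclusion contour.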

Using neural networks to generate neutral hydrogen from dark matter (and learn new physics)
Hydrodynamic simulations have a huge computational cost (~10 million CPU hours for a 0.001 Gpc^3 volume) and therefore cannot be used directly to make predictions for upcoming surveys probing ~100 Gpc^3 volumes. Focusing on neutral hydrogen (HI), I worked on training convolutional neural networks on the IllustrisTNG simulation to quickly generate accurate HI maps from cheap N-body simulations. Our model outperforms the widely used theoretical model, the Halo Occupation Distribution (HOD), on all the statistical properties we tested, up to non-linear scales.
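A minimal sketch of the idea, not the paper's architecture: a convolutional network takes a dark-matter density field from an N-body simulation and outputs a predicted HI field. The weights below are random; in practice they would be trained on paired (dark matter, HI) maps from a hydrodynamic simulation such as IllustrisTNG. Map sizes and layer widths are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2D convolution of an (H, W) map with (n, 3, 3) kernels."""
    H, W = x.shape
    n = kernels.shape[0]
    out = np.zeros((n, H - 2, W - 2))
    for k in range(n):
        for i in range(H - 2):
            for j in range(W - 2):
                out[k, i, j] = np.sum(x[i:i+3, j:j+3] * kernels[k])
    return out

dm_map = rng.random((32, 32))  # stand-in for a projected DM density slice

# One conv layer with 4 feature maps and a ReLU nonlinearity.
hidden = np.maximum(conv2d(dm_map, rng.standard_normal((4, 3, 3))), 0)

# 1x1-style linear combination of the feature maps into one predicted HI map.
hi_map = np.tensordot(rng.standard_normal(4), hidden, axes=1)
print(hi_map.shape)  # (30, 30)
```

A real version would stack many such layers (in 3D, over the full simulation box) and train the kernels by minimizing the difference between predicted and simulated HI maps.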

Although neural networks have been shown to outperform traditional theoretical techniques in many areas of physics, they are difficult to interpret. I am interested in using machine learning to infer useful additions to our physical theories. We recently found that the environment of a dark matter halo has a crucial effect on its HI content, and we used symbolic regression to obtain a novel analytic expression encoding this environmental effect.
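A toy sketch of what symbolic regression does, not the actual method or expression from the paper: search a library of candidate analytic forms for the one that best fits the data. The synthetic data and candidate expressions below are made up for illustration; real symbolic-regression tools search a vastly larger, genetically evolved expression space.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 2.0, 200)                         # stand-in "environment" variable
y = x**2 + 0.5 * x + 0.01 * rng.standard_normal(200)   # synthetic target relation

# Small library of candidate expressions, each with one free parameter a.
candidates = {
    "a*x":        lambda x, a: a * x,
    "a*x**2":     lambda x, a: a * x**2,
    "x**2 + a*x": lambda x, a: x**2 + a * x,
    "a*log(x)":   lambda x, a: a * np.log(x),
}

def best_fit(expr):
    """Least-squares fit of the free parameter a by a coarse grid scan."""
    a_grid = np.linspace(-2, 2, 401)
    errs = [np.mean((expr(x, a) - y) ** 2) for a in a_grid]
    i = int(np.argmin(errs))
    return a_grid[i], errs[i]

results = {name: best_fit(f) for name, f in candidates.items()}
winner = min(results, key=lambda name: results[name][1])
print(winner, results[winner])  # recovers "x**2 + a*x" with a ~ 0.5
```

The payoff of this approach over a black-box network is that the winning expression can be read, checked against physical intuition, and extrapolated.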

Analytic covariance matrices for upcoming galaxy surveys
To infer cosmological parameters from galaxy survey data, we typically compress the data into summary statistics such as the power spectrum, and the likelihood requires an accurate estimate of their covariance matrix. The traditional approach to obtaining the covariance involves simulating thousands of mock catalogs. We developed a novel analytic method to compute the covariance matrix that is more than four orders of magnitude faster and agrees very well with state-of-the-art mock simulations up to non-linear scales (k ~ 0.6 h/Mpc). We also validated our analytic method by using it to analyze the full-shape SDSS-BOSS survey data. I am now working on generalizing the technique to the covariance of the bispectrum, which is computationally prohibitive to estimate with mock simulations.
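As a flavor of what an analytic covariance looks like, here is the standard Gaussian (diagonal) contribution to the power spectrum covariance, which such calculations start from before adding non-Gaussian and survey-geometry terms. Conventions for factors of 2 differ between references, and the survey volume, number density, and power-law P(k) below are illustrative placeholders, not BOSS values.

```python
import numpy as np

V = 4.0e9        # survey volume in (Mpc/h)^3, placeholder
nbar = 3.0e-4    # galaxy number density in (h/Mpc)^3, placeholder
dk = 0.01        # k-bin width in h/Mpc

k = np.linspace(0.01, 0.59, 59)       # bin centers up to non-linear scales
P = 1.0e4 * (k / 0.1) ** (-1.5)       # placeholder power spectrum, (Mpc/h)^3

# Number of independent Fourier modes in each spherical k-shell.
n_modes = 4 * np.pi * k**2 * dk * V / (2 * np.pi) ** 3

# Gaussian variance per bin: shot noise (1/nbar) adds to the signal power.
var_P = 2 * (P + 1.0 / nbar) ** 2 / n_modes
cov = np.diag(var_P)                  # no mode coupling at this order
print(cov.shape)
```

Because each k-bin costs only a few arithmetic operations, evaluating such expressions (and their non-Gaussian extensions) is vastly cheaper than averaging over thousands of mock catalogs.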

The video below is from one of my online talks.