Discovering Supernovae in SDSS Galaxy Spectra

The post below was contributed by Dr. Or Graur, an assistant research scientist at New York University and research associate at the American Museum of Natural History. He recently led a paper based on supernovae detected in SDSS galaxy spectra (published in the Monthly Notices of the Royal Astronomical Society; the full text is available at: http://adsabs.harvard.edu/abs/2015MNRAS.450..905G).

[Figure: header of the Graur et al. (2015) paper.]

One of the great things about the SDSS is that it can be used in ways that its creators may never have envisioned. The SDSS collected ~800,000 galaxy spectra. As luck would have it, some of those galaxies happened to host supernovae, the explosions of stars, inside the area covered by the SDSS spectral fiber during the exposure time. These supernovae would then “contaminate” the galaxy spectra. In Graur & Maoz (2013), we developed a computer code that allowed us to identify such contaminated spectra and tweeze out the supernovae from the data. In Graur et al. (2015), we used this code to detect 91 Type Ia and 16 Type II supernovae.
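The idea behind the detection can be sketched in a few lines of Python. This is only a toy illustration of the general approach (fit the spectrum with and without a supernova template and ask whether the supernova improves the fit); the function names, the chi-squared threshold, and the template handling are hypothetical and much simpler than the actual Graur & Maoz (2013) code.

```python
import numpy as np

def chi2(model, flux, err):
    """Chi-squared of a model spectrum against the observed flux."""
    return np.sum(((flux - model) / err) ** 2)

def best_fit_scale(template, flux, err):
    """Analytic least-squares scale factor for a single template."""
    w = 1.0 / err ** 2
    return np.sum(w * template * flux) / np.sum(w * template ** 2)

def supernova_improves_fit(flux, err, galaxy_model, sn_templates, threshold=0.9):
    """Return the supernova template (if any) whose addition to the galaxy model
    substantially lowers the chi-squared of the fit to the observed spectrum."""
    residual = flux - galaxy_model                  # what the galaxy alone cannot explain
    chi2_galaxy_only = chi2(galaxy_model, flux, err)
    best = None
    for name, template in sn_templates.items():
        a = best_fit_scale(template, residual, err)  # scale the SN template to the residual
        if a <= 0:                                   # require a positive SN contribution
            continue
        chi2_combined = chi2(galaxy_model + a * template, flux, err)
        if chi2_combined < threshold * chi2_galaxy_only:   # demand a real improvement
            if best is None or chi2_combined < best[1]:
                best = (name, chi2_combined, a)
    return best
```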


[Figure: A galaxy+supernova model (blue) fits the SDSS spectrum (grey) much better than a galaxy-only model (green). The residual spectrum (lower panel, grey), after subtracting the galaxy component, is best fit by a Type Ia supernova template (red).]

With these samples, we measured the explosion rates of Type Ia and Type II supernovae as a function of various galaxy properties: stellar mass, star-formation rate, and specific star-formation rate. All of these properties were previously measured by the SDSS MPA-JHU Galspec pipeline.
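As a rough illustration of what "rate" means here: in each bin of a galaxy property, the rate is essentially the number of detected supernovae divided by how much stellar mass was monitored, and for how long. The sketch below is a deliberately simplified, hypothetical version; the real measurement also folds in detection efficiencies and the visibility time during which each supernova type would have been detectable in a given spectrum.

```python
import numpy as np

def mass_normalized_rate(n_sne, masses, visibility_times):
    """Supernova rate per unit stellar mass in one galaxy-property bin.

    n_sne            -- number of supernovae detected in the bin
    masses           -- stellar masses of all galaxies in the bin [M_sun]
    visibility_times -- effective time each galaxy was monitored, i.e. the
                        span over which a supernova would have been detectable [yr]
    """
    exposure = np.sum(np.asarray(masses) * np.asarray(visibility_times))  # M_sun * yr
    rate = n_sne / exposure            # supernovae per M_sun per yr
    err = np.sqrt(n_sne) / exposure    # simple Poisson uncertainty
    return rate, err
```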


In 2011, the Lick Observatory Supernova Search published a curious finding: the rates of all supernovae, normalized by the stellar mass of their host galaxies, declined with increasing stellar mass (instead of being independent of it; Li et al. 2011b). We confirmed this correlation, showed that the rates were also correlated with other galaxy properties, and demonstrated that all these correlations could be explained by two simple models.


Type Ia supernovae, which are thought to be the explosions of carbon-oxygen white dwarfs, follow a delay-time distribution. Unlike massive stars, which explode rather quickly after they are born (millions of years, typically), Type Ia supernovae take their time – some explode soon after their white dwarfs are formed, while others blow up billions of years later. We showed that this delay-time distribution (best described as a declining power law with an index of -1), coupled with galaxy downsizing (i.e., older galaxies tend to be more massive than younger ones), explained not only the correlation between the rates and the galaxies’ stellar masses, but also their correlations with other galaxy properties.

[Figure: Type Ia supernova rates as a function of galaxy stellar mass.]

[Figure: Type Ia supernova rates as a function of specific star-formation rate.]

Simulated rates, based on a model combining galaxy downsizing and the delay-time distribution, are shown as a grey curve in both plots above. The model is fit to the rates as a function of stellar mass and then re-binned and plotted on the specific star-formation rate plot, without further fitting.
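To make the downsizing plus delay-time-distribution argument concrete, here is a minimal sketch (not the paper's actual model) of why an old, massive galaxy has a lower mass-normalized Type Ia rate today than a young, star-forming one under a t^-1 delay-time distribution. The star-formation histories and the normalization of the delay-time distribution below are invented purely for illustration.

```python
import numpy as np

def dtd(delay, t_min=0.04):
    """Toy delay-time distribution: a t**-1 power law (arbitrary normalization),
    zero below a minimum delay t_min (all times in Gyr)."""
    delay = np.asarray(delay, dtype=float)
    safe = np.maximum(delay, t_min)              # avoid dividing by zero for very recent stars
    return np.where(delay > t_min, 1e-3 / safe, 0.0)

def ia_rate_per_unit_mass(times, sfr, t_now=13.0):
    """Present-day SN Ia rate per unit stellar mass formed, for a given
    star-formation history sfr(times) on a uniform time grid (Gyr)."""
    dt = times[1] - times[0]
    delays = t_now - times                       # how long ago each stellar population formed
    rate = np.sum(sfr * dtd(delays)) * dt        # convolve the SFH with the DTD
    mass_formed = np.sum(sfr) * dt               # total stellar mass formed (no recycling)
    return rate / mass_formed

times = np.linspace(0.01, 13.0, 2000)
old_burst = np.exp(-0.5 * ((times - 2.0) / 0.5) ** 2)   # massive galaxy: stars formed early
young_constant = np.ones_like(times)                    # low-mass galaxy: still forming stars

print(ia_rate_per_unit_mass(times, old_burst))          # lower mass-normalized rate
print(ia_rate_per_unit_mass(times, young_constant))     # higher mass-normalized rate
```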

For Type II supernovae, which explode promptly after their massive progenitors form, the correlations are easier to explain: the rates simply track the galaxies' current star-formation rates, so the more efficiently a galaxy produces stars, the more efficiently it produces Type II supernovae.
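A back-of-the-envelope version of that statement: the Type II rate is just the star-formation rate times the number of massive-star progenitors produced per unit mass of stars formed, which follows from an assumed initial mass function. The sketch below uses a Salpeter IMF and an 8 solar-mass progenitor cutoff purely as illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

def salpeter_imf(m):
    """Salpeter initial mass function, dN/dm proportional to m**-2.35 (unnormalized)."""
    return m ** -2.35

def sn2_per_solar_mass_formed(m_low=0.1, m_high=100.0, m_sn=8.0):
    """Number of core-collapse progenitors (m > m_sn) per solar mass of stars formed,
    for a Salpeter IMF between m_low and m_high (all masses in M_sun)."""
    n_sn, _ = quad(salpeter_imf, m_sn, m_high)                    # number of massive stars
    mass, _ = quad(lambda m: m * salpeter_imf(m), m_low, m_high)  # total mass formed
    return n_sn / mass

k2 = sn2_per_solar_mass_formed()
# The Type II rate then simply tracks the current star-formation rate:
#   R_II [SNe/yr] ~ k2 [SNe per M_sun formed] * SFR [M_sun/yr]
print(k2)   # roughly 0.007 supernovae per solar mass of stars formed
```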

All of the supernova spectra from Graur & Maoz (2013) and Graur et al. (2015) are publicly available from the Weizmann Interactive Supernova data REPository (http://wiserep.weizmann.ac.il/). Please note that their continua may be warped by our detection method (for details, see section 3 of Graur & Maoz 2013).
