A component of the data release for "The Population of Merging Compact Binaries Inferred Using Gravitational Waves Through GWTC-3"
Authors: Bruce Edelman, Amanda Farah, and Jacob Golomb on behalf of the LIGO Scientific Collaboration, Virgo Collaboration and KAGRA Collaboration
This software is provided under license: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/legalcode)
This notebook is intended to show readers how to load in the data associated with some of the gwpopulation-based models used in "The population of merging compact binaries inferred using gravitational waves through GWTC-3" (herein "the companion paper").
Here, we will show you how to load in hyperposterior samples, plot probability distributions as a function of source parameters, and make less-polished versions of the plots in the paper.
The models highlighted in this particular tutorial are:
These are all included in or built off of the open-source python package gwpopulation, which includes several other parameterized distributions as well, many of which are highlighted in "Population Properties of Compact Objects from the Second LIGO-Virgo Gravitational-Wave Transient Catalog."
Note that other parameterized (MultiSource and NS mass models) and non-parametric (Binned Gaussian Process, Flexible Mixtures) models that are not based on gwpopulation also exist and are used in the companion paper. There are separate tutorials for those.
This notebook will not show you how to run the population analysis. If you are interested in this, please look at the documentation for gwpopulation: https://colmtalbot.github.io/gwpopulation/
from gwpopulation.models.redshift import PowerLawRedshift
from bilby.core.result import read_in_result
from scipy.interpolate import interp1d
import numpy as np
import deepdish as dd
import matplotlib.pyplot as plt
We will begin by reading in the results of the population inference on our default event list for the BBH population. In this example, we will read in the results for the Powerlaw + Peak model. These results are stored as bilby result files in this data repository. A description of the parameters used in this model can be found in Appendix B1 of the companion paper.
PP_path = '../analyses/PowerLawPeak/o1o2o3_mass_c_iid_mag_iid_tilt_powerlaw_redshift_result.json'
PP_result = read_in_result(PP_path)
There is a lot of information in the Result object. Let's take a look at some useful attributes.
print("run label: ", PP_result.label)
print("log10 BF: ", PP_result.log_10_bayes_factor) # note that this will not always be the same BF that is quoted in the text because some corrections are necessary to compare these between models
print("\n", "hyper-priors:")
PP_result.priors
run label:  o1o2o3_mass_c_iid_mag_iid_tilt_powerlaw_redshift
log10 BF:  38.46431085780439

 hyper-priors:
{'alpha': Uniform(minimum=-4, maximum=12, name='alpha', latex_label='$\\alpha$', unit=None, boundary=None), 'beta': Uniform(minimum=-2, maximum=7, name='beta', latex_label='$\\beta_{q}$', unit=None, boundary=None), 'mmax': Uniform(minimum=30, maximum=100, name='mmax', latex_label='$m_{\\max}$', unit=None, boundary=None), 'mmin': Uniform(minimum=2, maximum=10, name='mmin', latex_label='$m_{\\min}$', unit=None, boundary=None), 'lam': Uniform(minimum=0, maximum=1, name='lambda', latex_label='$\\lambda_{m}$', unit=None, boundary=None), 'mpp': Uniform(minimum=20, maximum=50, name='mpp', latex_label='$\\mu_{m}$', unit=None, boundary=None), 'sigpp': Uniform(minimum=1, maximum=10, name='sigpp', latex_label='$\\sigma_{m}$', unit=None, boundary=None), 'delta_m': Uniform(minimum=0, maximum=10, name='delta_m', latex_label='$\\delta_{m}$', unit=None, boundary='reflective'), 'amax': DeltaFunction(peak=1.0, name=None, latex_label=None, unit=None), 'mu_chi': Uniform(minimum=0, maximum=1, name='mu_chi', latex_label='$\\mu_{\\chi}$', unit=None, boundary=None), 'sigma_chi': Uniform(minimum=0.005, maximum=0.25, name='sigma_chi', latex_label='$\\sigma^{2}_{\\chi}$', unit=None, boundary=None), 'alpha_chi': Constraint(minimum=1, maximum=100000.0, name=None, latex_label=None, unit=None), 'alpha_chi_1': Constraint(minimum=1, maximum=100000.0, name=None, latex_label=None, unit=None), 'alpha_chi_2': Constraint(minimum=1, maximum=100000.0, name=None, latex_label=None, unit=None), 'beta_chi': Constraint(minimum=1, maximum=100000.0, name=None, latex_label=None, unit=None), 'beta_chi_1': Constraint(minimum=1, maximum=100000.0, name=None, latex_label=None, unit=None), 'beta_chi_2': Constraint(minimum=1, maximum=100000.0, name=None, latex_label=None, unit=None), 'xi_spin': Uniform(minimum=0, maximum=1, name='xi_spin', latex_label='$\\zeta$', unit=None, boundary=None), 'sigma_spin': Uniform(minimum=0.1, maximum=4.0, name='sigma_t', latex_label='$\\sigma_t$', unit=None, boundary=None), 'lamb': Uniform(minimum=-10, maximum=10, name='lamb', latex_label='$\\lambda_z$', unit=None, boundary=None)}
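Because these are bilby prior objects, you can also draw samples from them or query their bounds directly. The following is a minimal sketch (assuming the standard bilby Prior.sample method and minimum/maximum attributes); here we look at the prior on the power-law slope alpha.
# Sketch: draw samples from the hyper-prior on alpha and check its support
alpha_prior = PP_result.priors['alpha']
alpha_prior_samples = alpha_prior.sample(10000)
print("prior support for alpha: [{}, {}]".format(alpha_prior.minimum, alpha_prior.maximum))
print("mean of alpha prior samples: ", np.mean(alpha_prior_samples))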
What you are probably most interested in are the hyperposterior samples, which are saved as a pandas DataFrame.
PP_hyperposterior_samples = PP_result.posterior.copy() # making a copy is best practice here so you don't accidentally modify things in-place
PP_hyperposterior_samples
|   | alpha | beta | mmax | mmin | lam | mpp | sigpp | delta_m | mu_chi | sigma_chi | ... | lamb | amax | log_likelihood | log_prior | selection | pdet_n_effective | surveyed_hypervolume | log_10_rate | rate | min_event_n_effective |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 2.116671 | 0.645405 | 84.257180 | 3.858558 | 0.075468 | 30.270897 | 8.346910 | 3.511792 | 0.262951 | 0.033104 | ... | -1.007767 | 1.0 | 86.740514 | -20.167742 | 0.006983 | 6630.971573 | 300.455132 | 1.475926 | 29.917560 | 205.407820 |
| 1 | 2.357356 | 0.504086 | 78.325899 | 5.301450 | 0.123904 | 38.688342 | 4.317169 | 2.576325 | 0.247116 | 0.028849 | ... | -2.460752 | 1.0 | 88.476899 | -20.167742 | 0.016273 | 3772.629171 | 107.519753 | 1.521225 | 33.206611 | 84.680631 |
| 2 | 2.763145 | -1.038759 | 92.335648 | 4.512187 | 0.136947 | 26.322204 | 9.283129 | 3.411572 | 0.201743 | 0.024835 | ... | 4.922529 | 1.0 | 89.160301 | -20.167742 | 0.000089 | 4852.535111 | 69321.160923 | 1.075940 | 11.910773 | 107.429424 |
| 3 | 2.975530 | 1.204292 | 81.253295 | 5.193624 | 0.095203 | 40.661528 | 6.232922 | 4.752846 | 0.223713 | 0.024432 | ... | -1.466976 | 1.0 | 89.577067 | -20.167742 | 0.010964 | 4340.422763 | 213.598090 | 1.485615 | 30.592485 | 89.268950 |
| 4 | 2.336330 | 0.888358 | 83.282458 | 3.218458 | 0.150673 | 27.806907 | 6.098497 | 5.846578 | 0.240640 | 0.019683 | ... | 1.572624 | 1.0 | 89.923126 | -20.167742 | 0.001794 | 5813.075241 | 2629.581289 | 1.171681 | 14.848441 | 80.848769 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 11464 | 3.417291 | 0.301697 | 85.168951 | 4.798439 | 0.035291 | 33.889739 | 2.350728 | 5.505413 | 0.230583 | 0.020566 | ... | 3.264948 | 1.0 | 108.115543 | -20.167742 | 0.000339 | 2972.807715 | 13097.278496 | 1.214318 | 16.380142 | 71.881363 |
| 11465 | 3.328885 | 1.642387 | 79.491399 | 4.931802 | 0.034236 | 34.720684 | 2.136944 | 5.327197 | 0.254585 | 0.032604 | ... | 2.109967 | 1.0 | 108.120336 | -20.167742 | 0.000872 | 3152.917529 | 4322.664396 | 1.369043 | 23.390707 | 72.814645 |
| 11466 | 3.328885 | 1.642387 | 79.491399 | 4.931802 | 0.034236 | 34.720684 | 2.136944 | 5.327197 | 0.254585 | 0.032604 | ... | 2.109967 | 1.0 | 108.120336 | -20.167742 | 0.000872 | 3152.917529 | 4322.664396 | 1.288162 | 19.416100 | 72.814645 |
| 11467 | 3.548725 | 0.760204 | 83.140164 | 4.815572 | 0.019037 | 34.467261 | 1.866748 | 5.453627 | 0.235676 | 0.024758 | ... | 3.431551 | 1.0 | 108.206956 | -20.167742 | 0.000256 | 3214.364467 | 15429.210409 | 1.259669 | 18.183135 | 76.580270 |
| 11468 | 3.548725 | 0.760204 | 83.140164 | 4.815572 | 0.019037 | 34.467261 | 1.866748 | 5.453627 | 0.235676 | 0.024758 | ... | 3.431551 | 1.0 | 108.206956 | -20.167742 | 0.000256 | 3214.364467 | 15429.210409 | 1.184209 | 15.283016 | 76.580270 |

11469 rows × 22 columns
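A quick way to summarize these samples is with medians and symmetric 90% credible intervals. The snippet below is a minimal sketch using only numpy, applied to a few of the mass hyper-parameters; the choice of parameters is purely illustrative.
# Sketch: median and symmetric 90% credible interval for a few hyper-parameters
for param in ['alpha', 'mmax', 'mpp']:
    low, med, high = np.percentile(PP_hyperposterior_samples[param], [5, 50, 95])
    print("{}: {:.2f} (+{:.2f} / -{:.2f})".format(param, med, high - med, med - low))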
We can pull 1D marginal distributions from this by picking out a specific column. As an example, let's try $\alpha$, the power-law slope on the primary mass $m_1$:
plt.hist(PP_hyperposterior_samples.alpha, bins=50, density=True, histtype='step')
plt.xlabel(r"$\alpha$")
plt.show()
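Correlations between pairs of hyper-parameters can be inspected the same way before committing to a full corner plot. As a minimal sketch using only matplotlib, here is the joint marginal of the primary-mass slope and the mass-ratio slope (any other pair of columns works the same way).
# Sketch: joint marginal of two hyper-parameters to look for correlations
plt.hist2d(PP_hyperposterior_samples.alpha, PP_hyperposterior_samples.beta, bins=50)
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta_{q}$")
plt.show()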
We can plot all of these samples with their many correlations using a corner plot:
c = PP_result.plot_corner(save=False)
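The full corner plot contains many parameters and can be busy. bilby's plot_corner can usually be restricted to a subset of parameters via its parameters keyword (check the bilby documentation for your version); as a sketch, here is a corner plot of only the mass hyper-parameters.
# Sketch: corner plot restricted to the mass hyper-parameters
# (assumes the `parameters` keyword of bilby's Result.plot_corner)
c_mass = PP_result.plot_corner(parameters=['alpha', 'beta', 'mmax', 'mmin', 'lam', 'mpp', 'sigpp', 'delta_m'], save=False)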