tutorial changes
Potatoasad committed Oct 28, 2024
1 parent faa8860 commit 29c1fe5
Showing 1 changed file with 39 additions and 33 deletions.
72 changes: 39 additions & 33 deletions docs/Examples/gravpop_tutorial.ipynb
@@ -2,11 +2,17 @@
"cells": [
{
"cell_type": "markdown",
-"id": "b76f782c",
+"id": "32ac9285",
"metadata": {},
"source": [
"# Gravpop tutorial\n",
"\n",
+"\n",
+"\n",
+"\n",
+"\n",
+"\n",
+"\n",
"This is a library that allows you to perform a population analysis, à la [Thrane et al.](https://arxiv.org/abs/1809.02293), but using a trick described in [Hussain et al.](...) that allows one to probe population features even when they become very narrow and approach the edges of a bounded domain. \n",
"\n",
"The trick relies on dividing the parameter space into two sectors: the __analytic__ sector $\\theta^a$, where the population model is a weighted sum of multivariate truncated normals and the population likelihood can be computed analytically, and the __sampled__ sector $\\theta^s$, where the model is general and the population likelihood is computed using a Monte Carlo estimate. \n",
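The payoff of the analytic sector can be sketched outside `gravpop` entirely (all names below are illustrative, not the library's API): for a 1D normal restricted to $[0, 1]$, the truncated mass has a closed form via error functions, whereas a general model must rely on a Monte Carlo average, which degrades for narrow features near the domain boundary.

```python
import math
import random

def truncnorm_prob(mu, sigma, a=0.0, b=1.0):
    """Analytic mass of N(mu, sigma^2) restricted to [a, b]."""
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return Phi((b - mu) / sigma) - Phi((a - mu) / sigma)

def mc_prob(mu, sigma, a=0.0, b=1.0, n=200_000, seed=0):
    """Monte Carlo estimate of the same mass from N(mu, sigma^2) draws."""
    rng = random.Random(seed)
    hits = sum(a <= rng.gauss(mu, sigma) <= b for _ in range(n))
    return hits / n

mu, sigma = 0.05, 0.02   # a narrow feature close to the edge of [0, 1]
exact = truncnorm_prob(mu, sigma)
approx = mc_prob(mu, sigma)
```

With a fixed, finite set of posterior samples the Monte Carlo estimate becomes noisy exactly in the narrow-feature, near-edge regime; the analytic sector sidesteps this.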
@@ -71,7 +77,7 @@
{
"cell_type": "code",
"execution_count": 3,
-"id": "b1a23da0",
+"id": "3400f274",
"metadata": {},
"outputs": [
{
@@ -275,7 +281,7 @@
},
{
"cell_type": "markdown",
-"id": "c3b69488",
+"id": "af24f0c3",
"metadata": {},
"source": [
"We can then fit the events we want with TGMMs (truncated Gaussian mixture models). Note that the `.dataproduct()` method of the `TGMM` class provides the fitted data in the format required by `gravpop`."
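The fit itself can be sketched with a toy stand-in (this is not `gravpop`'s TGMM implementation: a single moment-matched component stands in for the full truncated-mixture fit, and the `fit` dictionary only mimics the kind of per-event summary a data product would carry):

```python
import random
import statistics

# Hypothetical posterior samples for one event's bounded spin parameter.
rng = random.Random(1)
samples = [min(max(rng.gauss(0.3, 0.05), 0.0), 1.0) for _ in range(5000)]

# Moment-matched single-component "TGMM": one truncated normal per event.
mu_hat = statistics.fmean(samples)
sigma_hat = statistics.stdev(samples)

# Per-event means, scales, and component weights are the summary that
# something like `.dataproduct()` would hand to the likelihood.
fit = {"mu": [mu_hat], "sigma": [sigma_hat], "weights": [1.0]}
```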
@@ -284,7 +290,7 @@
{
"cell_type": "code",
"execution_count": 4,
-"id": "8926b86d",
+"id": "605cd216",
"metadata": {},
"outputs": [
{
@@ -337,7 +343,7 @@
},
{
"cell_type": "markdown",
-"id": "15496b80",
+"id": "9ece8b92",
"metadata": {},
"source": [
"We can now construct the data product we need:\n",
@@ -347,7 +353,7 @@
{
"cell_type": "code",
"execution_count": 5,
-"id": "af61ce61",
+"id": "f97e716f",
"metadata": {},
"outputs": [
{
@@ -388,7 +394,7 @@
},
{
"cell_type": "markdown",
-"id": "71db1530",
+"id": "0cf13137",
"metadata": {},
"source": [
"We now have our data in the correct format. \n",
@@ -431,15 +437,15 @@
},
{
"cell_type": "markdown",
-"id": "5c02d0b1",
+"id": "3554d94c",
"metadata": {},
"source": [
"# Models"
]
},
{
"cell_type": "markdown",
-"id": "fcc567e4",
+"id": "a92b56fa",
"metadata": {},
"source": [
"One can specify population models using a set of building-block models. Each population model is defined as a distribution over some parameters $\\theta$, defined below by `var_names`, and some hyper-parameters $\\Lambda$, defined below by `hyper_var_names`. \n",
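As a schematic of this interface (the attribute names `var_names` and `hyper_var_names` come from the tutorial; the class body is an illustrative sketch, not `gravpop`'s code), a building block might look like:

```python
import math

class TruncatedNormal1D:
    """Toy building block: a normal in one parameter, truncated to [0, 1],
    with location and scale as hyper-parameters."""

    def __init__(self, var_names, hyper_var_names):
        self.var_names = var_names              # e.g. ["chi_1"]
        self.hyper_var_names = hyper_var_names  # e.g. ["mu_chi", "sigma_chi"]

    def logpdf(self, data, params):
        x = data[self.var_names[0]]
        mu = params[self.hyper_var_names[0]]
        sigma = params[self.hyper_var_names[1]]
        if not (0.0 <= x <= 1.0):
            return -math.inf
        Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
        norm = Phi((1.0 - mu) / sigma) - Phi((0.0 - mu) / sigma)  # truncation mass
        return (-0.5 * ((x - mu) / sigma) ** 2
                - math.log(sigma * math.sqrt(2.0 * math.pi))
                - math.log(norm))

model = TruncatedNormal1D(["chi_1"], ["mu_chi", "sigma_chi"])
lp = model.logpdf({"chi_1": 0.4}, {"mu_chi": 0.5, "sigma_chi": 0.2})
```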
@@ -486,7 +492,7 @@
{
"cell_type": "code",
"execution_count": 1,
-"id": "fa13cbe8",
+"id": "d79b6aff",
"metadata": {},
"outputs": [],
"source": [
@@ -512,7 +518,7 @@
},
{
"cell_type": "markdown",
-"id": "c10fb445",
+"id": "9abb71b8",
"metadata": {},
"source": [
"We can combine these building blocks however we like, using the following operations:\n",
@@ -532,7 +538,7 @@
{
"cell_type": "code",
"execution_count": 2,
-"id": "3846c608",
+"id": "455cd87a",
"metadata": {},
"outputs": [],
"source": [
@@ -562,7 +568,7 @@
},
{
"cell_type": "markdown",
-"id": "8706088c",
+"id": "fe8759a0",
"metadata": {},
"source": [
"One can then evaluate this spin model on some set of parameters:"
@@ -571,7 +577,7 @@
{
"cell_type": "code",
"execution_count": 9,
-"id": "e880c5f9",
+"id": "57e76904",
"metadata": {},
"outputs": [
{
@@ -600,7 +606,7 @@
},
{
"cell_type": "markdown",
-"id": "941211a5",
+"id": "24618218",
"metadata": {},
"source": [
"## Sampled Models\n",
@@ -627,7 +633,7 @@
},
{
"cell_type": "markdown",
-"id": "b473e5ad",
+"id": "ffa7625f",
"metadata": {},
"source": [
"# Population Likelihood\n",
@@ -648,7 +654,7 @@
{
"cell_type": "code",
"execution_count": 15,
-"id": "425accdc",
+"id": "9490cfa7",
"metadata": {},
"outputs": [
{
@@ -675,7 +681,7 @@
},
{
"cell_type": "markdown",
-"id": "b51cdc38",
+"id": "895e0eec",
"metadata": {},
"source": [
"We can compute the log-likelihood for some hyper-parameters, and also confirm, by computing the derivative, that there are no NaN derivatives.\n",
@@ -686,7 +692,7 @@
{
"cell_type": "code",
"execution_count": 16,
-"id": "b00b1586",
+"id": "935f3404",
"metadata": {},
"outputs": [
{
@@ -710,7 +716,7 @@
},
{
"cell_type": "markdown",
-"id": "1d61327c",
+"id": "cbf9cc31",
"metadata": {},
"source": [
"All our models are auto-diff-able, so we can compute the gradient of the logpdf as below:"
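This property is not specific to gravpop's models: any JAX-traceable log-density can be differentiated the same way. A minimal sketch (the model below is a toy truncated normal, not one of gravpop's):

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

def logpdf(params):
    # Toy truncated-normal log-density at a fixed point x = 0.9 on [0, 1];
    # the hyper-parameters are the location and scale.
    mu, sigma = params["mu"], params["sigma"]
    z = norm.cdf(1.0, mu, sigma) - norm.cdf(0.0, mu, sigma)  # truncation mass
    return norm.logpdf(0.9, mu, sigma) - jnp.log(z)

# Gradients with respect to every hyper-parameter in one call.
grads = jax.grad(logpdf)({"mu": 0.95, "sigma": 0.05})
all_finite = all(bool(jnp.isfinite(g)) for g in grads.values())
```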
@@ -719,7 +725,7 @@
{
"cell_type": "code",
"execution_count": 18,
-"id": "ba65cdd4",
+"id": "a053808c",
"metadata": {},
"outputs": [
{
@@ -745,7 +751,7 @@
},
{
"cell_type": "markdown",
-"id": "7e11c500",
+"id": "86c88b47",
"metadata": {},
"source": [
"One can also load up the event and selection function data from a file:"
@@ -754,7 +760,7 @@
{
"cell_type": "code",
"execution_count": 19,
-"id": "96c86fd5",
+"id": "9f1b8c10",
"metadata": {},
"outputs": [],
"source": [
@@ -770,7 +776,7 @@
},
{
"cell_type": "markdown",
-"id": "a1be31ef",
+"id": "7b98306c",
"metadata": {},
"source": [
"# Sampling\n",
@@ -783,7 +789,7 @@
{
"cell_type": "code",
"execution_count": 22,
-"id": "83ce1f8d",
+"id": "8cd350ea",
"metadata": {},
"outputs": [],
"source": [
@@ -801,7 +807,7 @@
},
{
"cell_type": "markdown",
-"id": "3bd0c662",
+"id": "4701fe32",
"metadata": {},
"source": [
"Then, we can construct a `Sampler` object and put in our settings."
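The `Sampler`'s settings and internals are not enumerated in this excerpt; as a self-contained sketch of what such an object does under the hood (a toy random-walk Metropolis sampler over a single, made-up hyper-posterior, not gravpop's implementation), consider:

```python
import math
import random

def log_post(mu):
    # Illustrative one-parameter hyper-posterior: Gaussian on the unit interval.
    if not (0.0 <= mu <= 1.0):
        return -math.inf
    return -0.5 * ((mu - 0.5) / 0.1) ** 2

def run_sampler(log_post, n_steps=20_000, step=0.05, seed=0):
    """Random-walk Metropolis: propose, then accept with prob min(1, ratio)."""
    rng = random.Random(seed)
    x, lp = 0.5, log_post(0.5)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

chain = run_sampler(log_post)
```

A real `Sampler` would instead hand the population likelihood to a gradient-based MCMC engine, but the accept/reject loop above is the conceptual core.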
@@ -810,7 +816,7 @@
{
"cell_type": "code",
"execution_count": 24,
-"id": "23f28bb0",
+"id": "e432d77c",
"metadata": {},
"outputs": [],
"source": [
@@ -824,7 +830,7 @@
},
{
"cell_type": "markdown",
-"id": "5e388f7b",
+"id": "08a66057",
"metadata": {},
"source": [
"and we can begin sampling"
@@ -833,7 +839,7 @@
{
"cell_type": "code",
"execution_count": 25,
-"id": "b7de385f",
+"id": "251e1512",
"metadata": {},
"outputs": [
{
@@ -865,7 +871,7 @@
},
{
"cell_type": "markdown",
-"id": "3c5a1549",
+"id": "6d494a74",
"metadata": {},
"source": [
"we can see the dataframe holding the hyper-posterior samples in:"
@@ -874,7 +880,7 @@
{
"cell_type": "code",
"execution_count": 28,
-"id": "b26d5de3",
+"id": "6c1c5497",
"metadata": {},
"outputs": [
{
@@ -1027,7 +1033,7 @@
},
{
"cell_type": "markdown",
-"id": "1f40581d",
+"id": "c0db874e",
"metadata": {},
"source": [
"and here is a corner plot of our result"
@@ -1036,7 +1042,7 @@
{
"cell_type": "code",
"execution_count": 31,
-"id": "f11e5421",
+"id": "bed347c9",
"metadata": {},
"outputs": [
{
