From 29c1fe5a49a948d4bd8b76c5da4069332dfde201 Mon Sep 17 00:00:00 2001
From: Asad Hussain
Date: Mon, 28 Oct 2024 13:46:03 -0500
Subject: [PATCH] tutorial changes

---
 docs/Examples/gravpop_tutorial.ipynb | 72 +++++++++++++++-------------
 1 file changed, 39 insertions(+), 33 deletions(-)

diff --git a/docs/Examples/gravpop_tutorial.ipynb b/docs/Examples/gravpop_tutorial.ipynb
index 5a91069..a0ba0f3 100644
--- a/docs/Examples/gravpop_tutorial.ipynb
+++ b/docs/Examples/gravpop_tutorial.ipynb
@@ -2,11 +2,17 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "b76f782c",
+   "id": "32ac9285",
    "metadata": {},
    "source": [
     "# Gravpop tutorial\n",
     "\n",
+    "\n",
+    "\n",
+    "\n",
+    "\n",
+    "\n",
+    "\n",
     "This is a library that allows you to perform a population analysis, à la [Thrane et al.](https://arxiv.org/abs/1809.02293), but using a trick described in [Hussain et al.](...) that allows one to probe population features even when they become very narrow or approach the edges of a bounded domain. \n",
     "\n",
     "The trick relies on dividing the parameter space into two sectors: the __analytic__ sector $\\theta^a$, where the population model is a weighted sum of multivariate truncated normals and the population likelihood can be computed analytically, and the __sampled__ sector $\\theta^s$, where the model is general and the population likelihood is computed using the standard Monte Carlo estimate. \n",
@@ -71,7 +77,7 @@
   {
    "cell_type": "code",
    "execution_count": 3,
-   "id": "b1a23da0",
+   "id": "3400f274",
    "metadata": {},
    "outputs": [
     {
@@ -275,7 +281,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "c3b69488",
+   "id": "af24f0c3",
    "metadata": {},
    "source": [
     "We can then fit the events we want to TGMMs. Note that the `.dataproduct()` method of the TGMM class provides the fitted data in the format required by `gravpop`."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 4,
-   "id": "8926b86d",
+   "id": "605cd216",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "15496b80",
+   "id": "9ece8b92",
    "metadata": {},
    "source": [
     "We can now construct the data product we need:\n",
   {
    "cell_type": "code",
    "execution_count": 5,
-   "id": "af61ce61",
+   "id": "f97e716f",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "71db1530",
+   "id": "0cf13137",
    "metadata": {},
    "source": [
     "We now have our data in the correct format. \n",
   },
   {
    "cell_type": "markdown",
-   "id": "5c02d0b1",
+   "id": "3554d94c",
    "metadata": {},
    "source": [
     "# Models"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "fcc567e4",
+   "id": "a92b56fa",
    "metadata": {},
    "source": [
     "One can specify population models using a set of building-block models. Each population model is defined as a distribution over some parameters $\\theta$, defined below by `var_names`, and some hyper-parameters $\\Lambda$, defined below by `hyper_var_names`. \n",
   {
    "cell_type": "code",
    "execution_count": 1,
-   "id": "fa13cbe8",
+   "id": "d79b6aff",
    "metadata": {},
    "outputs": [],
    "source": [
   },
   {
    "cell_type": "markdown",
-   "id": "c10fb445",
+   "id": "9abb71b8",
    "metadata": {},
    "source": [
     "We can combine these building blocks however we like, using the following operations:\n",
   {
    "cell_type": "code",
    "execution_count": 2,
-   "id": "3846c608",
+   "id": "455cd87a",
    "metadata": {},
    "outputs": [],
    "source": [
   },
   {
    "cell_type": "markdown",
-   "id": "8706088c",
+   "id": "fe8759a0",
    "metadata": {},
    "source": [
     "One can then evaluate this spin model on some set of parameters"
   {
    "cell_type": "code",
    "execution_count": 9,
-   "id": "e880c5f9",
+   "id": "57e76904",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "941211a5",
+   "id": "24618218",
    "metadata": {},
    "source": [
     "## Sampled Models\n",
   },
   {
    "cell_type": "markdown",
-   "id": "b473e5ad",
+   "id": "ffa7625f",
    "metadata": {},
    "source": [
     "# Population Likelihood\n",
   {
    "cell_type": "code",
    "execution_count": 15,
-   "id": "425accdc",
+   "id": "9490cfa7",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "b51cdc38",
+   "id": "895e0eec",
    "metadata": {},
    "source": [
     "We can compute the log-likelihood for some hyper-parameters, and also confirm, by computing the derivative, that there are no NaN derivatives.\n",
   {
    "cell_type": "code",
    "execution_count": 16,
-   "id": "b00b1586",
+   "id": "935f3404",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "1d61327c",
+   "id": "cbf9cc31",
    "metadata": {},
    "source": [
     "All our models are auto-differentiable, so we can compute the gradient of the `logpdf` as below:"
   {
    "cell_type": "code",
    "execution_count": 18,
-   "id": "ba65cdd4",
+   "id": "a053808c",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "7e11c500",
+   "id": "86c88b47",
    "metadata": {},
    "source": [
     "One can also load the event and selection-function data from a file:"
   {
    "cell_type": "code",
    "execution_count": 19,
-   "id": "96c86fd5",
+   "id": "9f1b8c10",
    "metadata": {},
    "outputs": [],
    "source": [
   },
   {
    "cell_type": "markdown",
-   "id": "a1be31ef",
+   "id": "7b98306c",
    "metadata": {},
    "source": [
     "# Sampling\n",
   {
    "cell_type": "code",
    "execution_count": 22,
-   "id": "83ce1f8d",
+   "id": "8cd350ea",
    "metadata": {},
    "outputs": [],
    "source": [
   },
   {
    "cell_type": "markdown",
-   "id": "3bd0c662",
+   "id": "4701fe32",
    "metadata": {},
    "source": [
     "Then we can construct a `Sampler` object and pass in our settings."
   {
    "cell_type": "code",
    "execution_count": 24,
-   "id": "23f28bb0",
+   "id": "e432d77c",
    "metadata": {},
    "outputs": [],
    "source": [
   },
   {
    "cell_type": "markdown",
-   "id": "5e388f7b",
+   "id": "08a66057",
    "metadata": {},
    "source": [
     "and we can begin sampling"
   {
    "cell_type": "code",
    "execution_count": 25,
-   "id": "b7de385f",
+   "id": "251e1512",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "3c5a1549",
+   "id": "6d494a74",
    "metadata": {},
    "source": [
     "We can see the dataframe holding the hyper-posterior samples in:"
   {
    "cell_type": "code",
    "execution_count": 28,
-   "id": "b26d5de3",
+   "id": "6c1c5497",
    "metadata": {},
    "outputs": [
     {
   },
   {
    "cell_type": "markdown",
-   "id": "1f40581d",
+   "id": "c0db874e",
    "metadata": {},
    "source": [
     "and here is a corner plot of our result"
   {
    "cell_type": "code",
    "execution_count": 31,
-   "id": "f11e5421",
+   "id": "bed347c9",
    "metadata": {},
    "outputs": [
     {