<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Another Book on Data Science - Random Variables & Distributions</title>
<meta name="description" content="data science, R, Python, programming, machine learning">
<meta name="author" content="Nailong Zhang">
<!-- Le HTML5 shim, for IE6-8 support of HTML elements -->
<!--[if lt IE 9]>
<script src="https://html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<!-- Le styles -->
<link rel="stylesheet" href="bootstrap-1.1.0.min.css">
<link rel="stylesheet" href="style.css">
<link rel="stylesheet" href="small-screens.css">
<link rel="stylesheet" href="vs.css">
<link rel="stylesheet" href="code.css">
<link rel="stylesheet" href="application.css">
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-142297640-1', 'anotherbookondatascience.com');
ga('send', 'pageview');
</script>
</head>
<body>
<div class="topbar">
<div class="fill">
<div class="container-fluid fixed">
<h3><a href="index.html">Random Variables & Distributions</a></h3>
<ul class="nav secondary-nav">
<li><a href="chapter3.html">«Previous</a></li>
<li><a href="chapter5.html">Next»</a></li>
</ul>
</div>
</div>
</div>
<div class="container-fluid" style="padding-top: 60px;">
<p>Sections in this Chapter:</p>
<ul>
<li><a href="#refresher">A refresher on distributions</a></li>
<li><a href="#sampling">Inversion sampling & rejection sampling</a></li>
<li><a href="#copula">Joint distribution & copula</a></li>
<li><a href="#fit">Fit a distribution</a></li>
<li><a href="#ci">Confidence interval</a></li>
<li><a href="#ht">Hypothesis testing</a></li>
</ul>
<p>We will see a few topics related to random variables (r.v.) and distributions in this chapter.</p>
<h2 id="refresher">A refresher on distributions</h2>
<p>Distributions describe random phenomena in terms of probabilities, and they are closely connected to random variables: a random variable is specified by its distribution. A random variable (r.v.) can be either discrete or continuous. The number of calls received by a call center is an example of a discrete r.v., while the amount of time taken to read this book is an example of a continuous r.v.</p>
<p>There are infinitely many distributions. Many important distributions fall into distribution families (i.e., parametric sets of probability distributions of a certain form). For example, the multivariate normal distribution belongs to the exponential family<sup class="footnote" id="fnr1"><a href="#fn1">1</a></sup>.</p>
<p>The cumulative distribution function (<span class="caps">CDF</span>) specifies how a real-valued random variable $X$ is distributed, i.e.,</p>
<p>$$<br />
\begin{equation}\label{eq:4_0_1}<br />
F_X(x)=P(X\le x).<br />
\end{equation}<br />
$$</p>
<p>It is worth noting that a <span class="caps">CDF</span> is always monotonically non-decreasing.</p>
<p>When the derivative of the <span class="caps">CDF</span> exists, we call the derivative of $F$ the probability density function (<span class="caps">PDF</span>), denoted by the lower-case $f_X$. A <span class="caps">PDF</span> is associated with continuous random variables. For discrete random variables, the counterpart of the <span class="caps">PDF</span> is the probability mass function (<span class="caps">PMF</span>). A <span class="caps">PMF</span> gives the probability that a random variable equals some value, but a <span class="caps">PDF</span> does not represent probabilities directly.</p>
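<p>This distinction is easy to check numerically with <code>scipy.stats</code> (a small sketch; the particular parameters are chosen only for illustration): a <span class="caps">PDF</span> value can exceed $1$, while <span class="caps">PMF</span> values are genuine probabilities that sum to $1$.</p>

```python
from scipy.stats import norm, poisson

# The density of N(0, 0.1^2) at its mode is 1/(0.1*sqrt(2*pi)),
# about 3.99 -- larger than 1, so it cannot be a probability.
d = norm(loc=0.0, scale=0.1).pdf(0.0)
print(d)

# The PMF of Poisson(3) gives genuine probabilities: each value
# lies in [0, 1], and together they sum to (almost) 1.
p = poisson(mu=3).pmf(range(30))
print(p.sum())
```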
<p>The quantile function is the inverse of <span class="caps">CDF</span>, i.e.,</p>
<p>$$<br />
\begin{equation}\label{eq:4_0_1_2}<br />
Q_X(p)=\inf \{ x\in \boldsymbol{R}:p\le F_X(x) \}.<br />
\end{equation}<br />
$$</p>
<p><span class="caps">PDF</span>, <span class="caps">CDF</span> and quantile functions are all heavily used in quantitative analysis. In R and Python, we can find a number of functions to evaluate them. The best-known distribution is probably the univariate normal/Gaussian distribution. Let’s use Gaussian random variables for illustration.</p>
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="c1"># for reproducibility</span>
<span class="lineno"> 2 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">></span> <span class="c1"># draw samples</span>
<span class="lineno"> 4 </span><span class="o">></span> x <span class="o">=</span> rnorm<span class="p">(</span><span class="m">10</span><span class="p">,</span> mean<span class="o">=</span><span class="m">0</span><span class="p">,</span> sd<span class="o">=</span><span class="m">2</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno"> 6 </span> <span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">2.7419169</span> <span class="m">-1.1293963</span> <span class="m">0.7262568</span> <span class="m">1.2657252</span> <span class="m">0.8085366</span>
<span class="lineno"> 7 </span> <span class="p">[</span><span class="m">6</span><span class="p">]</span> <span class="m">-0.2122490</span> <span class="m">3.0230440</span> <span class="m">-0.1893181</span> <span class="m">4.0368474</span> <span class="m">-0.1254282</span>
<span class="lineno"> 8 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span><span class="kp">mean</span><span class="p">(</span>x<span class="p">))</span>
<span class="lineno"> 9 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">1.094594</span>
<span class="lineno">10 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>sd<span class="p">(</span>x<span class="p">))</span>
<span class="lineno">11 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">1.670898</span>
<span class="lineno">12 </span><span class="o">></span> <span class="c1"># evaluate PDF</span>
<span class="lineno">13 </span><span class="o">></span> d <span class="o">=</span> dnorm<span class="p">(</span>x<span class="p">,</span> mean<span class="o">=</span><span class="m">0</span><span class="p">,</span> sd<span class="o">=</span><span class="m">2</span><span class="p">)</span>
<span class="lineno">14 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>d<span class="p">)</span>
<span class="lineno">15 </span> <span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.07793741</span> <span class="m">0.17007297</span> <span class="m">0.18674395</span> <span class="m">0.16327110</span> <span class="m">0.18381928</span>
<span class="lineno">16 </span> <span class="p">[</span><span class="m">6</span><span class="p">]</span> <span class="m">0.19835103</span> <span class="m">0.06364496</span> <span class="m">0.19857948</span> <span class="m">0.02601446</span> <span class="m">0.19907926</span>
<span class="lineno">17 </span><span class="o">></span> <span class="c1"># evaluate CDF</span>
<span class="lineno">18 </span><span class="o">></span> p <span class="o">=</span> pnorm<span class="p">(</span>x<span class="p">,</span> mean<span class="o">=</span><span class="m">0</span><span class="p">,</span> sd<span class="o">=</span><span class="m">2</span><span class="p">)</span>
<span class="lineno">19 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>p<span class="p">)</span>
<span class="lineno">20 </span> <span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.9148060</span> <span class="m">0.2861395</span> <span class="m">0.6417455</span> <span class="m">0.7365883</span> <span class="m">0.6569923</span> <span class="m">0.4577418</span>
<span class="lineno">21 </span> <span class="p">[</span><span class="m">7</span><span class="p">]</span> <span class="m">0.9346722</span> <span class="m">0.4622928</span> <span class="m">0.9782264</span> <span class="m">0.4749971</span>
<span class="lineno">22 </span><span class="o">></span> <span class="c1"># evaluate quantile</span>
<span class="lineno">23 </span><span class="o">></span> q <span class="o">=</span> qnorm<span class="p">(</span>p<span class="p">,</span> mean<span class="o">=</span><span class="m">0</span><span class="p">,</span> sd<span class="o">=</span><span class="m">2</span><span class="p">)</span>
<span class="lineno">24 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span><span class="kp">q</span><span class="p">)</span>
<span class="lineno">25 </span> <span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">2.7419169</span> <span class="m">-1.1293963</span> <span class="m">0.7262568</span> <span class="m">1.2657252</span> <span class="m">0.8085366</span>
<span class="lineno">26 </span> <span class="p">[</span><span class="m">6</span><span class="p">]</span> <span class="m">-0.2122490</span> <span class="m">3.0230440</span> <span class="m">-0.1893181</span> <span class="m">4.0368474</span> <span class="m">-0.1254282</span></code></pre></figure><p>We see that <code>qnorm(pnorm(x))=x</code>.</p>
<p>In Python, we use the functions in <code>numpy</code> and <code>scipy.stats</code> for the same purpose.</p>
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">scipy.stats</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">scipy.stats</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">scale</span><span class="o">=</span><span class="mf">2.0</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="p">[</span> <span class="mf">0.99342831</span> <span class="o">-</span><span class="mf">0.2765286</span> <span class="mf">1.29537708</span> <span class="mf">3.04605971</span> <span class="o">-</span><span class="mf">0.46830675</span> <span class="o">-</span><span class="mf">0.46827391</span>
<span class="lineno"> 8 </span> <span class="mf">3.15842563</span> <span class="mf">1.53486946</span> <span class="o">-</span><span class="mf">0.93894877</span> <span class="mf">1.08512009</span><span class="p">]</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">norm</span> <span class="o">=</span> <span class="n">scipy</span><span class="o">.</span><span class="n">stats</span><span class="o">.</span><span class="n">norm</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">scale</span><span class="o">=</span><span class="mf">2.0</span><span class="p">)</span>
<span class="lineno">10 </span><span class="o">>>></span> <span class="n">d</span> <span class="o">=</span> <span class="n">norm</span><span class="o">.</span><span class="n">pdf</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno">11 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">d</span><span class="p">)</span>
<span class="lineno">12 </span><span class="p">[</span><span class="mf">0.17632116</span> <span class="mf">0.19757358</span> <span class="mf">0.16172856</span> <span class="mf">0.06254333</span> <span class="mf">0.19407713</span> <span class="mf">0.19407788</span>
<span class="lineno">13 </span> <span class="mf">0.05732363</span> <span class="mf">0.1485901</span> <span class="mf">0.17865677</span> <span class="mf">0.17217026</span><span class="p">]</span>
<span class="lineno">14 </span><span class="o">>>></span> <span class="n">p</span> <span class="o">=</span> <span class="n">norm</span><span class="o">.</span><span class="n">cdf</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno">15 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">p</span><span class="p">)</span>
<span class="lineno">16 </span><span class="p">[</span><span class="mf">0.69030468</span> <span class="mf">0.44501577</span> <span class="mf">0.74140679</span> <span class="mf">0.93612438</span> <span class="mf">0.40743296</span> <span class="mf">0.40743933</span>
<span class="lineno">17 </span> <span class="mf">0.94285637</span> <span class="mf">0.77858846</span> <span class="mf">0.31936529</span> <span class="mf">0.70628362</span><span class="p">]</span>
<span class="lineno">18 </span><span class="o">>>></span> <span class="n">q</span> <span class="o">=</span> <span class="n">norm</span><span class="o">.</span><span class="n">ppf</span><span class="p">(</span><span class="n">p</span><span class="p">)</span> <span class="c1"># ppf is the quantile function</span>
<span class="lineno">19 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">q</span><span class="p">)</span>
<span class="lineno">20 </span><span class="p">[</span> <span class="mf">0.99342831</span> <span class="o">-</span><span class="mf">0.2765286</span> <span class="mf">1.29537708</span> <span class="mf">3.04605971</span> <span class="o">-</span><span class="mf">0.46830675</span> <span class="o">-</span><span class="mf">0.46827391</span>
<span class="lineno">21 </span> <span class="mf">3.15842563</span> <span class="mf">1.53486946</span> <span class="o">-</span><span class="mf">0.93894877</span> <span class="mf">1.08512009</span><span class="p">]</span></code></pre></figure><p>A random variable could also be multivariate. In fact, the univariate normal distribution is a special case of multivariate normal distribution whose <span class="caps">PDF</span> is given in</p>
<p>$$<br />
\begin{equation}\label{eq:4_0_1_3}<br />
p(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma})=\frac {1} {(2\pi)^{m/2}{|\boldsymbol{\Sigma|}}^{1/2}}\exp(-\frac 1 2 (\boldsymbol{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})),<br />
\end{equation}<br />
$$</p>
<p>where $\boldsymbol{\mu}$ is the mean and $\boldsymbol{\Sigma}$ is the covariance matrix of the random variable $\boldsymbol{x}$.</p>
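<p>To connect the formula with the library functions, the <span class="caps">PDF</span> above can be evaluated directly with <code>numpy</code> and compared against <code>scipy.stats.multivariate_normal</code>. The sketch below uses an arbitrarily chosen point $\boldsymbol{x}$ and the same parameters as the sampling example that follows; the two evaluations agree.</p>

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
x = np.array([0.5, 1.5])

# direct evaluation of p(x; mu, Sigma) following the formula
m = len(mu)
diff = x - mu
direct = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / (
    (2.0 * np.pi) ** (m / 2.0) * np.sqrt(np.linalg.det(Sigma)))

# evaluation via scipy.stats
via_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(x)
print(direct, via_scipy)
```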
<p>Sampling from distributions is involved in many algorithms, such as Monte Carlo simulation. First, let’s see a simple example in which we draw samples from a bivariate normal distribution.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kn">library</span><span class="p">(</span>mvtnorm<span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> mu <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> <span class="m">2</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">></span> cov <span class="o">=</span> <span class="kt">array</span><span class="p">(</span><span class="kt">c</span><span class="p">(</span><span class="m">1.0</span><span class="p">,</span> <span class="m">0.5</span><span class="p">,</span> <span class="m">0.5</span><span class="p">,</span> <span class="m">2.0</span><span class="p">),</span> <span class="kt">c</span><span class="p">(</span><span class="m">2</span><span class="p">,</span> <span class="m">2</span><span class="p">))</span>
<span class="lineno"> 4 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">></span> x <span class="o">=</span> rmvnorm<span class="p">(</span>n <span class="o">=</span> <span class="m">1</span><span class="p">,</span> mean <span class="o">=</span> mu<span class="p">,</span> sigma <span class="o">=</span> cov<span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno"> 7 </span> <span class="p">[,</span><span class="m">1</span><span class="p">]</span> <span class="p">[,</span><span class="m">2</span><span class="p">]</span>
<span class="lineno"> 8 </span><span class="p">[</span><span class="m">1</span><span class="p">,]</span> <span class="m">2.221431</span> <span class="m">1.498778</span>
<span class="lineno"> 9 </span><span class="o">></span> d <span class="o">=</span> dmvnorm<span class="p">(</span>x<span class="p">,</span> mean <span class="o">=</span> mu<span class="p">,</span> sigma <span class="o">=</span> cov<span class="p">)</span>
<span class="lineno">10 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>d<span class="p">)</span>
<span class="lineno">11 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.04007949</span>
<span class="lineno">12 </span><span class="o">></span> p <span class="o">=</span> pmvnorm<span class="p">(</span>lower<span class="o">=-</span><span class="kc">Inf</span><span class="p">,</span> upper <span class="o">=</span> x<span class="p">[</span><span class="m">1</span><span class="p">,],</span> mean <span class="o">=</span> mu<span class="p">,</span> sigma <span class="o">=</span> cov<span class="p">)</span>
<span class="lineno">13 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>p<span class="p">)</span>
<span class="lineno">14 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.344201</span>
<span class="lineno">15 </span><span class="kp">attr</span><span class="p">(,</span><span class="s">"error"</span><span class="p">)</span>
<span class="lineno">16 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">1e-15</span>
<span class="lineno">17 </span><span class="kp">attr</span><span class="p">(,</span><span class="s">"msg"</span><span class="p">)</span>
<span class="lineno">18 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="s">"Normal Completion"</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">scipy.stats</span> <span class="k">import</span> <span class="n">multivariate_normal</span> <span class="k">as</span> <span class="n">mvn</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="n">mu</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">])</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">cov</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">],[</span><span class="mf">0.5</span><span class="p">,</span> <span class="mi">2</span><span class="p">]])</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">mvn</span><span class="o">.</span><span class="n">random_state</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">dist</span> <span class="o">=</span> <span class="n">mvn</span><span class="p">(</span><span class="n">mean</span> <span class="o">=</span> <span class="n">mu</span><span class="p">,</span> <span class="n">cov</span> <span class="o">=</span> <span class="n">cov</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="n">x</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">rvs</span><span class="p">(</span><span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="p">[</span><span class="mf">1.16865045</span> <span class="mf">2.72887797</span><span class="p">]</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">d</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">pdf</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno">10 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">d</span><span class="p">)</span>
<span class="lineno">11 </span><span class="mf">0.10533537790438291</span>
<span class="lineno">12 </span><span class="o">>>></span> <span class="n">p</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">cdf</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno">13 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">p</span><span class="p">)</span>
<span class="lineno">14 </span><span class="mf">0.44523665187828293</span></code></pre></figure></div>
</div>
<p>Please note that in the example above we do not calculate quantiles. For multivariate distributions, the quantile of a given probability is in general not a unique point.</p>
<h2 id="sampling">Inversion sampling & rejection sampling</h2>
<h3>Inversion sampling</h3>
<p>Sampling from the Gaussian or other well-known distributions can be as simple as calling a function. But what if we want to draw samples from an arbitrary distribution given its <span class="caps">CDF</span>? Inversion sampling is a generic solution. The key idea of inversion sampling is that $F_{X}(X)$ always follows a uniform distribution between $0$ and $1$. There are two steps to draw a sample with the inversion sampling approach.</p>
<ul>
<li>draw independent and identically distributed (i.i.d.) random variables $u_1,…,u_n$ uniformly distributed between 0 and 1;</li>
<li>calculate the quantiles $q_1,…,q_n$ of $u_1,…,u_n$ based on the <span class="caps">CDF</span>, and return $q_1,…,q_n$ as the desired samples.</li>
</ul>
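<p>The fact underlying this approach can itself be checked numerically: applying the <span class="caps">CDF</span> to samples of $X$ (the probability integral transform) yields values that are uniformly distributed on $(0,1)$. A quick sketch with a standard normal $X$ (the sample size is arbitrary):</p>

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
x = rng.normal(size=100_000)   # X ~ N(0, 1)
u = norm.cdf(x)                # F_X(X), the probability integral transform

# For Uniform(0, 1) we expect mean 1/2 and variance 1/12
print(u.mean(), u.var())
```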
<p>Let’s see how to use the inversion sampling technique to sample from the exponential distribution, whose <span class="caps">CDF</span> is $F_X(x;\lambda) = 1-e^{-\lambda x}$.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> n<span class="o">=</span><span class="m">1000</span>
<span class="lineno"> 2 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">></span> <span class="c1"># step 1, draw from uniform dist</span>
<span class="lineno"> 4 </span><span class="o">></span> u <span class="o">=</span> runif<span class="p">(</span>n<span class="o">=</span>n<span class="p">,</span> min<span class="o">=</span><span class="m">0</span><span class="p">,</span>max<span class="o">=</span><span class="m">1</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">></span> <span class="c1"># step 2, calculate the quantiles</span>
<span class="lineno"> 6 </span><span class="o">></span> lambda <span class="o">=</span> <span class="m">2.0</span>
<span class="lineno"> 7 </span><span class="o">></span> x <span class="o">=</span> qexp<span class="p">(</span>u<span class="p">,</span> rate <span class="o">=</span> lambda<span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">></span> <span class="c1"># the mean is close to 1/lambda</span>
<span class="lineno"> 9 </span><span class="o">></span> <span class="kp">mean</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno">10 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.4818323</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">n</span><span class="o">=</span><span class="mi">1000</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="c1"># step 1, generate from uniform(0,1)</span>
<span class="lineno"> 5 </span><span class="o">...</span> <span class="n">u</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">low</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">high</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="c1"># step 2, calculate the quantile</span>
<span class="lineno"> 7 </span><span class="o">...</span> <span class="n">lamb</span> <span class="o">=</span> <span class="mf">2.0</span>
<span class="lineno"> 8 </span><span class="o">>>></span> <span class="n">q</span> <span class="o">=</span> <span class="o">-</span><span class="n">np</span><span class="o">.</span><span class="n">log</span><span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">u</span><span class="p">)</span><span class="o">/</span><span class="n">lamb</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">mean</span><span class="p">(</span><span class="n">q</span><span class="p">)</span>
<span class="lineno">10 </span><span class="mf">0.4862529739826122</span></code></pre></figure></div>
</div>
<p>In the R implementation above, we used the built-in quantile function in step 2; however, for many distributions there is no built-in quantile function available, and we need to specify the quantile function ourselves (as illustrated in the Python implementation).</p>
<h3>Rejection sampling</h3>
<p>Rejection sampling is another basic algorithm for drawing samples of a random variable $X$ given its <span class="caps">PDF</span> $f_X$. The basic idea is to draw samples of a random variable $Y$ with <span class="caps">PDF</span> $f_Y$ and accept each sample $x$ with probability $f_X(x)/(Mf_Y(x))$, where $M$ is chosen such that $f_X(x)/(Mf_Y(x))\le1$ for all $x$. If a sample is rejected, the procedure repeats until a sample is accepted. The distribution $f_Y$ is called the proposal distribution. More theoretical details of rejection sampling can be found on Wikipedia<sup class="footnote" id="fnr2"><a href="#fn2">2</a></sup>.</p>
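<p>The procedure described above can be sketched as a generic routine. The following is a minimal sketch, not code from the book; the target <span class="caps">PDF</span> <code>f_x</code>, the proposal <span class="caps">PDF</span> <code>f_y</code>, the proposal sampler <code>sample_y</code>, and the constant <code>M</code> are hypothetical names assumed to be supplied by the caller.</p>

```python
import numpy as np

def rejection_sample(f_x, f_y, sample_y, M, n, seed=0):
    """Draw n samples from the PDF f_x by rejection sampling.

    f_x: target PDF; f_y: proposal PDF; sample_y: draws one proposal given a rng;
    M: constant such that f_x(x) <= M * f_y(x) on the support of f_y.
    """
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < n:
        y = sample_y(rng)
        # accept y with probability f_x(y) / (M * f_y(y))
        if rng.uniform() <= f_x(y) / (M * f_y(y)):
            samples.append(y)
    return np.array(samples)
```

For instance, the truncated exponential example discussed next corresponds to a uniform proposal with <code>f_y(y) = 1/b</code>, <code>sample_y = lambda rng: rng.uniform(0, b)</code>, and <code>M = b*lamb/(1 - np.exp(-lamb*b))</code>.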
<p>Let’s try to draw samples from an exponential distribution truncated between $0$ and $b$. The <span class="caps">PDF</span> of this random variable is specified by $f_X(x;\lambda, b) = \lambda e^{-\lambda x}/( 1-e^{-\lambda b})$.</p>
<p>A naive approach is to sample from the untruncated exponential distribution and accept only the samples smaller than $b$, as implemented below.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span>n<span class="o">=</span><span class="m">1000</span>
<span class="lineno"> 2 </span><span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 3 </span>lambda <span class="o">=</span> <span class="m">1.0</span>
<span class="lineno"> 4 </span>b <span class="o">=</span> <span class="m">2.0</span>
<span class="lineno"> 5 </span>x <span class="o">=</span> <span class="kp">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> n<span class="p">)</span>
<span class="lineno"> 6 </span>i <span class="o">=</span> <span class="m">0</span>
<span class="lineno"> 7 </span><span class="kr">while</span> <span class="p">(</span>i<span class="o"><</span>n<span class="p">){</span>
<span class="lineno"> 8 </span> y <span class="o">=</span> rexp<span class="p">(</span>n <span class="o">=</span> <span class="m">1</span><span class="p">,</span> rate <span class="o">=</span> lambda<span class="p">)</span>
<span class="lineno"> 9 </span> <span class="kr">if</span> <span class="p">(</span>y<span class="o"><</span>b<span class="p">)</span> <span class="p">{</span>
<span class="lineno">10 </span> i<span class="o">=</span>i<span class="m">+1</span>
<span class="lineno">11 </span> x<span class="p">[</span>i<span class="p">]</span> <span class="o">=</span> y
<span class="lineno">12 </span> <span class="p">}</span>
<span class="lineno">13 </span><span class="p">}</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="n">lamb</span><span class="p">,</span> <span class="n">b</span> <span class="o">=</span> <span class="mf">1.0</span><span class="p">,</span> <span class="mf">2.0</span>
<span class="lineno"> 4 </span><span class="n">n</span><span class="o">=</span><span class="mi">1000</span>
<span class="lineno"> 5 </span><span class="n">x</span> <span class="o">=</span> <span class="p">[]</span>
<span class="lineno"> 6 </span><span class="n">i</span> <span class="o">=</span> <span class="mi">0</span>
<span class="lineno"> 7 </span><span class="k">while</span> <span class="n">i</span><span class="o"><</span><span class="n">n</span><span class="p">:</span>
<span class="lineno"> 8 </span> <span class="n">y</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">exponential</span><span class="p">(</span><span class="n">scale</span> <span class="o">=</span> <span class="mf">1.0</span><span class="o">/</span><span class="n">lamb</span><span class="p">,</span> <span class="n">size</span> <span class="o">=</span> <span class="mi">1</span><span class="p">)</span>
<span class="lineno"> 9 </span> <span class="k">if</span> <span class="n">y</span><span class="o"><</span><span class="n">b</span><span class="p">:</span>
<span class="lineno">10 </span> <span class="n">x</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">y</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="lineno">11 </span> <span class="n">i</span><span class="o">+=</span><span class="mi">1</span></code></pre></figure></div>
</div>
<p>After running either code snippet above, the samples from the truncated exponential distribution are stored in $x$.</p>
<p>Now let’s use the rejection sampling technique for this task. Since we want to sample the random variable between $0$ and $b$, one natural choice of the proposal distribution $f_Y$ is the uniform distribution between $0$ and $b$, and we choose $M=b\lambda/(1-e^{-\lambda b})$. As a result, the acceptance probability $f_X(x)/(Mf_Y(x))$ becomes $e^{-\lambda x}$.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span>n<span class="o">=</span><span class="m">1000</span>
<span class="lineno"> 2 </span><span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 3 </span>lambda <span class="o">=</span> <span class="m">1.0</span>
<span class="lineno"> 4 </span>b <span class="o">=</span> <span class="m">2.0</span>
<span class="lineno"> 5 </span>x <span class="o">=</span> <span class="kp">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> n<span class="p">)</span>
<span class="lineno"> 6 </span>i <span class="o">=</span> <span class="m">0</span>
<span class="lineno"> 7 </span><span class="kr">while</span> <span class="p">(</span>i<span class="o"><</span>n<span class="p">){</span>
<span class="lineno"> 8 </span> <span class="c1"># sample from the proposal distribution</span>
<span class="lineno"> 9 </span> y <span class="o">=</span> runif<span class="p">(</span><span class="m">1</span><span class="p">,</span> min <span class="o">=</span> <span class="m">0</span><span class="p">,</span> max <span class="o">=</span> b<span class="p">)</span>
<span class="lineno">10 </span> <span class="c1"># calculate the acceptance probability</span>
<span class="lineno">11 </span> p <span class="o">=</span> <span class="kp">exp</span><span class="p">(</span><span class="o">-</span>lambda<span class="o">*</span>y<span class="p">)</span>
<span class="lineno">12 </span> <span class="kr">if</span> <span class="p">(</span>runif<span class="p">(</span><span class="m">1</span><span class="p">)</span><span class="o"><=</span>p<span class="p">)</span> <span class="p">{</span>
<span class="lineno">13 </span> i<span class="o">=</span>i<span class="m">+1</span>
<span class="lineno">14 </span> x<span class="p">[</span>i<span class="p">]</span> <span class="o">=</span> y
<span class="lineno">15 </span> <span class="p">}</span>
<span class="lineno">16 </span><span class="p">}</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="n">lamb</span><span class="p">,</span> <span class="n">b</span> <span class="o">=</span> <span class="mf">1.0</span><span class="p">,</span> <span class="mf">2.0</span>
<span class="lineno"> 4 </span><span class="n">n</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="lineno"> 5 </span><span class="n">x</span> <span class="o">=</span> <span class="p">[]</span>
<span class="lineno"> 6 </span><span class="n">i</span> <span class="o">=</span> <span class="mi">0</span>
<span class="lineno"> 7 </span><span class="k">while</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">:</span>
<span class="lineno"> 8 </span> <span class="c1"># sample from the proposal distribution</span>
<span class="lineno"> 9 </span> <span class="n">y</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">low</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">high</span><span class="o">=</span><span class="n">b</span><span class="p">)</span>
<span class="lineno">10 </span> <span class="c1"># calculate the acceptance probability</span>
<span class="lineno">11 </span> <span class="n">p</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">exp</span><span class="p">(</span><span class="o">-</span><span class="n">lamb</span><span class="o">*</span><span class="n">y</span><span class="p">)</span>
<span class="lineno">12 </span> <span class="k">if</span> <span class="n">p</span> <span class="o">>=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">():</span>
<span class="lineno">13 </span> <span class="n">x</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">y</span><span class="p">)</span>
<span class="lineno">14 </span> <span class="n">i</span> <span class="o">+=</span> <span class="mi">1</span></code></pre></figure></div>
</div>
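<p>The loops above draw one candidate per iteration. In NumPy, a batched variant of the same rejection sampler is usually much faster; the sketch below (not code from the book) proposes candidates in batches and keeps the accepted ones.</p>

```python
import numpy as np

rng = np.random.default_rng(42)
lamb, b, n = 1.0, 2.0, 1000

samples = np.empty(0)
while samples.size < n:
    # propose a batch of candidates from the Uniform(0, b) proposal
    y = rng.uniform(0, b, size=n)
    # accept each candidate independently with probability exp(-lambda * y)
    accepted = y[rng.uniform(size=n) <= np.exp(-lamb * y)]
    samples = np.concatenate([samples, accepted])
samples = samples[:n]
```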
<p>We have seen basic examples of drawing samples with inversion sampling and rejection sampling. Now let’s work on a more challenging problem.</p>
<appname>Application – Draw samples from a sphere</appname>
<blockquote class="appquote">
<p>Without loss of generality, let’s consider a unit sphere, i.e., the radius $r=1$. We want to draw i.i.d. points uniformly from the unit sphere. The problem appears simple at first glance – we could utilize the spherical coordinate system and draw samples of $\phi$ and $\theta$. Now the question is how to sample $\phi$ and $\theta$. A straightforward idea is to draw independent uniform samples of $\phi$ from $0$ to $2\pi$ and of $\theta$ from $0$ to $\pi$, respectively. However, this idea is incorrect, as the analysis below shows.</p>
</blockquote>
<figure class="text-center">
<img src="figures/sphere.png" alt="" style="display: block;margin: auto;" width="40%">
<figcaption class="centerfigcaption"></figcaption>
</figure>
<p>Let’s use $f_P(\phi,\theta)$ to denote the <span class="caps">PDF</span> of the joint distribution of $(\phi,\theta)$. Integrating this <span class="caps">PDF</span> over its support gives</p>
<p>$$<br />
\begin{equation}\label{eq:4_0_2}<br />
1 = \int_{0}^{2\pi} \int_{0}^{\pi} f_P(\phi,\theta) d\theta d\phi = \int_{0}^{2\pi} \int_{0}^{\pi} f_\Phi(\phi) f_{\Theta|\Phi}(\theta|\phi) d\theta d\phi.<br />
\end{equation}<br />
$$</p>
<p>If we enforce that $\Phi$ follows a uniform distribution between $0$ and $2\pi$, then $f_\Phi(\phi)=1/{2\pi}$, and</p>
<p>$$<br />
\begin{equation}\label{eq:4_0_3}<br />
1=\int_{0}^{\pi} f_{\Theta|\Phi}(\theta|\phi) d\theta.<br />
\end{equation}<br />
$$</p>
<p>One solution to \eqref{eq:4_0_3} is $f_{\Theta|\Phi}(\theta|\phi)=\sin(\theta)/2$.</p>
<p>Thus, we could generate samples of $\Phi$ from the uniform distribution and samples of $\Theta$ from the distribution whose <span class="caps">PDF</span> is $\sin(\theta)/2$. Sampling $\Phi$ is trivial, but how about $\Theta$? We could use the inversion sampling technique. The <span class="caps">CDF</span> of $\Theta$ is $(1-\cos(\theta))/2$ for $0\le\theta\le \pi$, and the quantile function is $Q_\Theta(p)=\arccos(1-2p)$.</p>
<p>Sampling from the unit sphere is implemented below.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span>n<span class="o">=</span><span class="m">2000</span>
<span class="lineno"> 2 </span><span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="c1"># sample phi</span>
<span class="lineno"> 4 </span>phi <span class="o">=</span> runif<span class="p">(</span>n<span class="p">,</span> min <span class="o">=</span> <span class="m">0</span><span class="p">,</span> max <span class="o">=</span> <span class="m">2</span><span class="o">*</span><span class="kc">pi</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="c1"># sample theta by inversion cdf</span>
<span class="lineno"> 6 </span>u <span class="o">=</span> runif<span class="p">(</span>n<span class="p">,</span> min <span class="o">=</span> <span class="m">0</span><span class="p">,</span> max <span class="o">=</span> <span class="m">1</span><span class="p">)</span>
<span class="lineno"> 7 </span>theta <span class="o">=</span> <span class="kp">acos</span><span class="p">(</span><span class="m">1-2</span><span class="o">*</span>u<span class="p">)</span>
<span class="lineno"> 8 </span><span class="c1"># now we calculate the Cartesian coordinates</span>
<span class="lineno"> 9 </span>x <span class="o">=</span> <span class="kp">sin</span><span class="p">(</span>theta<span class="p">)</span><span class="o">*</span><span class="kp">cos</span><span class="p">(</span>phi<span class="p">)</span>
<span class="lineno">10 </span>y <span class="o">=</span> <span class="kp">sin</span><span class="p">(</span>theta<span class="p">)</span><span class="o">*</span><span class="kp">sin</span><span class="p">(</span>phi<span class="p">)</span>
<span class="lineno">11 </span>z <span class="o">=</span> <span class="kp">cos</span><span class="p">(</span>theta<span class="p">)</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="n">n</span> <span class="o">=</span> <span class="mi">2000</span>
<span class="lineno"> 4 </span><span class="c1"># sample phi</span>
<span class="lineno"> 5 </span><span class="n">phi</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">low</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">high</span><span class="o">=</span><span class="mi">2</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">pi</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="c1"># sample theta by inversion cdf</span>
<span class="lineno"> 7 </span><span class="n">u</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">low</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">high</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="n">theta</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">arccos</span><span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">u</span><span class="p">)</span>
<span class="lineno"> 9 </span><span class="c1"># we calculate the Cartesian coordinates</span>
<span class="lineno">10 </span><span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">phi</span><span class="p">)</span>
<span class="lineno">11 </span><span class="n">y</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">phi</span><span class="p">)</span>
<span class="lineno">12 </span><span class="n">z</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span></code></pre></figure></div>
</div>
<figure class="text-center">
<img src="figures/sphere2.png" alt="Uniform samples from a unit sphere" style="display: block;margin: auto;" width="65%">
<figcaption class="centerfigcaption">Uniform samples from a unit sphere</figcaption>
</figure>
<p>There are also other solutions to this problem, which we will not discuss in this book. A related problem is to draw samples from inside a sphere. We could solve this problem with a similar approach, or with rejection sampling, i.e., sampling from the enclosing cube and accepting the points that fall inside the sphere, with acceptance ratio $\pi/6$.</p>
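<p>The cube-based rejection approach mentioned above can be sketched as follows (a sketch, not code from the book): propose points uniformly in the cube $[-1,1]^3$ and keep those that fall inside the unit ball; the acceptance ratio is $\frac{4\pi/3}{8}=\pi/6$.</p>

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

points = np.empty((0, 3))
while points.shape[0] < n:
    # propose a batch of points uniformly inside the cube [-1, 1]^3
    c = rng.uniform(-1, 1, size=(n, 3))
    # keep the points inside the unit ball; acceptance ratio is pi/6
    points = np.vstack([points, c[(c ** 2).sum(axis=1) <= 1]])
points = points[:n]
```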
<h2 id="copula">Joint distribution & copula</h2>
<p>We are not trying to introduce these concepts from scratch; this section is more of a recap.</p>
<p>In the previous section, we saw the <span class="caps">PDF</span> of the multivariate normal distribution in \eqref{eq:4_0_1_3}. A multivariate distribution is also called a joint distribution, since a multivariate random variable can be viewed as a joint of multiple univariate random variables. The joint <span class="caps">PDF</span> gives the probability density of a set of random variables. Sometimes we may only be interested in the probability distribution of a single random variable from the set, and that distribution is called the marginal distribution. The <span class="caps">PDF</span> of a marginal distribution can be obtained by integrating the joint <span class="caps">PDF</span> over all the other random variables. For example, integrating \eqref{eq:4_0_1_3} over all but one variable gives the <span class="caps">PDF</span> of a univariate normal distribution.</p>
<p>The joint distribution describes the whole population. In the context of a bivariate Gaussian random variable $(X_1,X_2)$, the joint <span class="caps">PDF</span> $f_{X_1,X_2}(x_1,x_2)$ specifies the probability density for all pairs of $(X_1,X_2)$ in the 2-dimensional plane. The marginal distribution of $X_1$ is still about the whole population because we are not ruling out any points from the support of the distribution function. Sometimes we are interested in a subpopulation only, for example, the subset of $(X_1,X_2)$ where $X_2=2$ or $X_2>5$. We can use a conditional distribution to describe the probability distribution of a subpopulation. To denote a conditional distribution, the symbol $|$ is frequently used. We use $f_{X_1|X_2}(x_1|x_2)$ to represent the distribution of $X_1$ conditional on $X_2=x_2$. By the rule of conditional probability $P(A|B)=P(A,B)/P(B)$, the calculation of $f_{X_1|X_2}(x_1|x_2)$ is straightforward, i.e., $f_{X_1|X_2}(x_1|x_2)=f_{X_1,X_2}(x_1,x_2)/f_{X_2}(x_2)$.</p>
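<p>As a quick numeric illustration (a sketch, not from the text): for a standard bivariate Gaussian with correlation $\rho$, the conditional distribution of $X_1$ given $X_2=x_2$ is known to be $N(\rho x_2, 1-\rho^2)$, and we can verify that the ratio $f_{X_1,X_2}(x_1,x_2)/f_{X_2}(x_2)$ reproduces it.</p>

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.6
joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

x1, x2 = 0.5, 2.0
# conditional density via the ratio f(x1, x2) / f(x2)
ratio = joint.pdf([x1, x2]) / norm.pdf(x2)
# closed form: X1 | X2 = x2 ~ N(rho * x2, 1 - rho^2)
closed_form = norm.pdf(x1, loc=rho * x2, scale=np.sqrt(1 - rho ** 2))
assert np.isclose(ratio, closed_form)
```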
<p>The best-known joint distribution is the multivariate Gaussian distribution, which has many important and useful properties. For example, given the observed values of $(X_1,…,X_k)$ from $(X_1,…,X_m)$, the remaining components $(X_{k+1},…,X_m)$ still follow a multivariate Gaussian distribution, a property essential to Gaussian process regression<sup class="footnote" id="fnr3"><a href="#fn3">3</a></sup>.</p>
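<p>The conditioning property can be sketched with the standard partition formulas (a sketch under common notation, not code from the book): if $(\boldsymbol{x}_a,\boldsymbol{x}_b)$ is jointly Gaussian, then $\boldsymbol{x}_b|\boldsymbol{x}_a$ is Gaussian with mean $\mu_b+\Sigma_{ba}\Sigma_{aa}^{-1}(\boldsymbol{x}_a-\mu_a)$ and covariance $\Sigma_{bb}-\Sigma_{ba}\Sigma_{aa}^{-1}\Sigma_{ab}$.</p>

```python
import numpy as np

def gaussian_condition(mu, cov, idx_obs, x_obs):
    """Conditional mean and covariance of the unobserved coordinates
    of a multivariate Gaussian, given x[idx_obs] = x_obs."""
    idx_obs = np.asarray(idx_obs)
    idx_rest = np.setdiff1d(np.arange(len(mu)), idx_obs)
    S_aa = cov[np.ix_(idx_obs, idx_obs)]
    S_ba = cov[np.ix_(idx_rest, idx_obs)]
    S_bb = cov[np.ix_(idx_rest, idx_rest)]
    w = S_ba @ np.linalg.inv(S_aa)
    mu_cond = mu[idx_rest] + w @ (x_obs - mu[idx_obs])
    cov_cond = S_bb - w @ S_ba.T
    return mu_cond, cov_cond
```

For a bivariate Gaussian with zero means, unit variances, and correlation $\rho=0.6$, conditioning on $X_1=2$ gives conditional mean $\rho\cdot 2=1.2$ and conditional variance $1-\rho^2=0.64$.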
<p>We have seen the extension from the univariate Gaussian distribution to the multivariate Gaussian distribution, but what about other distributions? For example, what is the joint distribution of two univariate exponential distributions? We could use a copula<sup class="footnote" id="fnr4"><a href="#fn4">4</a></sup> for this purpose. For the random variable $(X_1,…,X_m)$, let $(U_1,…,U_m)=(F_{X_1}(X_1),…,F_{X_m}(X_m))$, where $F_{X_k}$ is the <span class="caps">CDF</span> of $X_k$. By the probability integral transform, each $U_k$ follows a uniform distribution on $(0,1)$. Let $C(U_1,…,U_m)$ denote the joint <span class="caps">CDF</span> of $(U_1,…,U_m)$; this <span class="caps">CDF</span> is called a copula.</p>
<p>There are different copula functions, and a commonly used one is the Gaussian copula. The standard Gaussian copula is specified below.</p>
<p>$$<br />
\begin{equation}\label{eq:4_0_3_0}<br />
C^{Gauss}_{\Sigma}(u_1,…,u_m)=\Phi_{\Sigma}(\Phi^{-1}(u_1),…,\Phi^{-1}(u_m)),<br />
\end{equation}<br />
$$</p>
<p>where $\Phi$ denotes the <span class="caps">CDF</span> of the standard Gaussian distribution, and $\Phi_{\Sigma}$ denotes the <span class="caps">CDF</span> of a multivariate Gaussian distribution with mean $\boldsymbol{0}$ and correlation matrix $\Sigma$.</p>
<p>Let’s see an example of drawing samples from a bivariate exponential distribution constructed via the Gaussian copula. The basic idea of sampling multivariate random variables via a copula is to sample $(U_1,…,U_m)$ first and then transform them into the desired random variables.</p>
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kn">library</span><span class="p">(</span>mvtnorm<span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> n <span class="o">=</span> <span class="m">10000</span>
<span class="lineno"> 3 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">></span> rates <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">1.0</span><span class="p">,</span> <span class="m">2.0</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">></span> mu <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">0</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">></span> rho <span class="o">=</span> <span class="m">0.6</span>
<span class="lineno"> 7 </span><span class="o">></span> correlation <span class="o">=</span> <span class="kt">array</span><span class="p">(</span><span class="kt">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> rho<span class="p">,</span> rho<span class="p">,</span> <span class="m">1</span><span class="p">),</span> <span class="kt">c</span><span class="p">(</span><span class="m">2</span><span class="p">,</span> <span class="m">2</span><span class="p">))</span>
<span class="lineno"> 8 </span><span class="o">></span> <span class="c1"># sample R from multivariate Gaussian</span>
<span class="lineno"> 9 </span><span class="o">></span> r <span class="o">=</span> rmvnorm<span class="p">(</span>n <span class="o">=</span> n<span class="p">,</span> mean <span class="o">=</span> mu<span class="p">,</span> sigma <span class="o">=</span> correlation<span class="p">)</span>
<span class="lineno">10 </span><span class="o">></span> <span class="c1"># generate U</span>
<span class="lineno">11 </span><span class="o">></span> u <span class="o">=</span> pnorm<span class="p">(</span>r<span class="p">)</span>
<span class="lineno">12 </span><span class="o">></span> <span class="c1"># calculate the quantile for X</span>
<span class="lineno">13 </span><span class="o">></span> x1 <span class="o">=</span> qexp<span class="p">(</span>u<span class="p">[,</span> <span class="m">1</span><span class="p">],</span> rate <span class="o">=</span> rates<span class="p">[</span><span class="m">1</span><span class="p">])</span>
<span class="lineno">14 </span><span class="o">></span> x2 <span class="o">=</span> qexp<span class="p">(</span>u<span class="p">[,</span> <span class="m">2</span><span class="p">],</span> rate <span class="o">=</span> rates<span class="p">[</span><span class="m">2</span><span class="p">])</span>
<span class="lineno">15 </span><span class="o">></span> x <span class="o">=</span> <span class="kp">cbind</span><span class="p">(</span>x1<span class="p">,</span> x2<span class="p">)</span>
<span class="lineno">16 </span><span class="o">></span> cor<span class="p">(</span>x<span class="p">)</span>
<span class="lineno">17 </span> x1 x2
<span class="lineno">18 </span>x1 <span class="m">1.0000000</span> <span class="m">0.5476137</span>
<span class="lineno">19 </span>x2 <span class="m">0.5476137</span> <span class="m">1.0000000</span>
<span class="lineno">20 </span><span class="o">></span> <span class="kp">apply</span><span class="p">(</span>x<span class="p">,</span> <span class="kp">mean</span><span class="p">,</span> MARGIN <span class="o">=</span> <span class="m">2</span><span class="p">)</span>
<span class="lineno">21 </span> x1 x2
<span class="lineno">22 </span><span class="m">0.9934398</span> <span class="m">0.4990758</span></code></pre></figure><language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">scipy.stats</span> <span class="k">import</span> <span class="n">multivariate_normal</span> <span class="k">as</span> <span class="n">mvn</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">scipy.stats</span> <span class="k">import</span> <span class="n">norm</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">mu</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">])</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">rho</span> <span class="o">=</span> <span class="mf">0.6</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="n">cov</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span> <span class="n">rho</span><span class="p">],[</span><span class="n">rho</span><span class="p">,</span> <span class="mi">1</span><span class="p">]])</span>
<span class="lineno"> 7 </span><span class="o">>>></span> <span class="n">mvn</span><span class="o">.</span><span class="n">random_state</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">>>></span> <span class="n">n</span> <span class="o">=</span> <span class="mi">10000</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">uvn</span> <span class="o">=</span> <span class="n">norm</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">scale</span><span class="o">=</span><span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">10 </span><span class="o">>>></span> <span class="n">dist</span> <span class="o">=</span> <span class="n">mvn</span><span class="p">(</span><span class="n">mean</span> <span class="o">=</span> <span class="n">mu</span><span class="p">,</span> <span class="n">cov</span> <span class="o">=</span> <span class="n">cov</span><span class="p">)</span>
<span class="lineno">11 </span><span class="o">>>></span> <span class="n">rates</span> <span class="o">=</span> <span class="p">[</span><span class="mf">1.0</span><span class="p">,</span> <span class="mf">2.0</span><span class="p">]</span>
<span class="lineno">12 </span><span class="o">>>></span> <span class="c1"># sample R from multivariate Gaussian</span>
<span class="lineno">13 </span><span class="o">...</span> <span class="n">r</span> <span class="o">=</span> <span class="n">dist</span><span class="o">.</span><span class="n">rvs</span><span class="p">(</span><span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="lineno">14 </span><span class="o">>>></span> <span class="c1"># generate U</span>
<span class="lineno">15 </span><span class="o">...</span> <span class="n">u</span> <span class="o">=</span> <span class="n">uvn</span><span class="o">.</span><span class="n">cdf</span><span class="p">(</span><span class="n">r</span><span class="p">)</span>
<span class="lineno">16 </span><span class="o">>>></span> <span class="c1"># calculate the quantile for X</span>
<span class="lineno">17 </span><span class="o">...</span> <span class="k">def</span> <span class="nf">qexp</span><span class="p">(</span><span class="n">u</span><span class="p">,</span> <span class="n">rate</span><span class="p">):</span> <span class="k">return</span> <span class="o">-</span><span class="n">np</span><span class="o">.</span><span class="n">log</span><span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">u</span><span class="p">)</span><span class="o">/</span><span class="n">rate</span>
<span class="lineno">18 </span><span class="o">...</span>
<span class="lineno">19 </span><span class="o">>>></span> <span class="n">x1</span> <span class="o">=</span> <span class="n">qexp</span><span class="p">(</span><span class="n">u</span><span class="p">[:,</span> <span class="mi">0</span><span class="p">],</span> <span class="n">rate</span> <span class="o">=</span> <span class="n">rates</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="lineno">20 </span><span class="o">>>></span> <span class="n">x2</span> <span class="o">=</span> <span class="n">qexp</span><span class="p">(</span><span class="n">u</span><span class="p">[:,</span> <span class="mi">1</span><span class="p">],</span> <span class="n">rate</span> <span class="o">=</span> <span class="n">rates</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="lineno">21 </span><span class="o">>>></span> <span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">column_stack</span><span class="p">((</span><span class="n">x1</span><span class="p">,</span> <span class="n">x2</span><span class="p">))</span>
<span class="lineno">22 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">corrcoef</span><span class="p">(</span><span class="n">x</span><span class="o">.</span><span class="n">T</span><span class="p">)</span>
<span class="lineno">23 </span><span class="n">array</span><span class="p">([[</span><span class="mf">1.</span> <span class="p">,</span> <span class="mf">0.57359023</span><span class="p">],</span>
<span class="lineno">24 </span> <span class="p">[</span><span class="mf">0.57359023</span><span class="p">,</span> <span class="mf">1.</span> <span class="p">]])</span>
<span class="lineno">25 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">mean</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
<span class="lineno">26 </span><span class="n">array</span><span class="p">([</span><span class="mf">0.99768825</span><span class="p">,</span> <span class="mf">0.50319692</span><span class="p">])</span></code></pre></figure><p>We plot 2000 samples generated from the bivariate exponential distribution constructed via copula in the figure below.</p>
<figure class="text-center">
<img src="figures/copula.png" alt="Samples from a bivariate exponential distribution constructed via Gaussian copula" style="display: block;margin: auto;" width="50%">
<figcaption class="centerfigcaption">Samples from a bivariate exponential distribution constructed via Gaussian copula</figcaption>
</figure>
<p>With the help of a copula, we can even construct a joint distribution whose marginals come from different distribution families. For example, let’s build a joint distribution of a uniformly distributed random variable and an exponentially distributed random variable.</p>
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kn">library</span><span class="p">(</span>mvtnorm<span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> n <span class="o">=</span> <span class="m">10000</span>
<span class="lineno"> 3 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">></span> mu <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">0</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">></span> rho <span class="o">=</span> <span class="m">0.6</span>
<span class="lineno"> 6 </span><span class="o">></span> correlation <span class="o">=</span> <span class="kt">array</span><span class="p">(</span><span class="kt">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> rho<span class="p">,</span> rho<span class="p">,</span> <span class="m">1</span><span class="p">),</span> <span class="kt">c</span><span class="p">(</span><span class="m">2</span><span class="p">,</span> <span class="m">2</span><span class="p">))</span>
<span class="lineno"> 7 </span><span class="o">></span> <span class="c1"># sample R from multivariate Gaussian</span>
<span class="lineno"> 8 </span><span class="o">></span> r <span class="o">=</span> rmvnorm<span class="p">(</span>n <span class="o">=</span> n<span class="p">,</span> mean <span class="o">=</span> mu<span class="p">,</span> sigma <span class="o">=</span> correlation<span class="p">)</span>
<span class="lineno"> 9 </span><span class="o">></span> <span class="c1"># generate U</span>
<span class="lineno">10 </span><span class="o">></span> u <span class="o">=</span> pnorm<span class="p">(</span>r<span class="p">)</span>
<span class="lineno">11 </span><span class="o">></span> <span class="c1"># calculate the quantile for X</span>
<span class="lineno">12 </span><span class="o">></span> <span class="c1"># X1 ~ exp(1.0); X2 ~ unif(0, 2)</span>
<span class="lineno">13 </span><span class="o">></span> x1 <span class="o">=</span> qexp<span class="p">(</span>u<span class="p">[,</span> <span class="m">1</span><span class="p">],</span> rate <span class="o">=</span> <span class="m">1.0</span><span class="p">)</span>
<span class="lineno">14 </span><span class="o">></span> x2 <span class="o">=</span> qunif<span class="p">(</span>u<span class="p">[,</span> <span class="m">2</span><span class="p">],</span> min <span class="o">=</span> <span class="m">0</span><span class="p">,</span> max <span class="o">=</span> <span class="m">2</span><span class="p">)</span>
<span class="lineno">15 </span><span class="o">></span> cor<span class="p">(</span>x1<span class="p">,</span> x2<span class="p">)</span>
<span class="lineno">16 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.5220363</span></code></pre></figure><figure class="text-center">
<img src="figures/copula_2.png" alt="Samples from a joint distribution of a uniform marginal and an exponential marginal" style="display: block;margin: auto;" width="50%">
<figcaption class="centerfigcaption">Samples from a joint distribution of a uniform marginal and an exponential marginal</figcaption>
</figure>
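<p>The same construction can be sketched in Python as well. The snippet below is my translation of the R session above using scipy (the scipy calls and variable names are my choices, not from the text): sample correlated Gaussians, push each margin through the Gaussian CDF, then apply the inverse CDFs of the target marginals.</p>

```python
import numpy as np
from scipy import stats

np.random.seed(42)
n = 10000
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])

# sample from a bivariate Gaussian with the desired correlation
r = np.random.multivariate_normal([0.0, 0.0], cov, size=n)
# push each margin through the Gaussian CDF to get Uniform(0, 1) variables
u = stats.norm.cdf(r)
# apply the inverse CDFs of the target marginals
x1 = stats.expon.ppf(u[:, 0], scale=1.0)         # X1 ~ Exp(rate = 1)
x2 = stats.uniform.ppf(u[:, 1], loc=0, scale=2)  # X2 ~ Unif(0, 2)
print(np.corrcoef(x1, x2)[0, 1])
```

The resulting correlation is close to, but not equal to, the copula's $\rho=0.6$, since the marginal transformations are nonlinear.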
<h2 id="fit">Fit a distribution</h2>
<p>Statistics is used to solve real-world problems with data. In many cases we have a collection of observations of a random variable and want to know which distribution the observations follow. There are actually two questions involved in fitting a distribution. First, which distribution should we fit? Second, given the distribution, how do we estimate its parameters? These are essentially the same questions we have to answer in supervised learning, where we need to choose a model and estimate its parameters (if the model has any). We may also refer to these two questions as model selection and model fitting. Usually, model selection is done based on the results of model fitting.</p>
<p>There are two widely-used methods in distribution fitting: the method of moments and the maximum likelihood method. In this Section we will see the method of moments; the maximum likelihood method will be introduced in Chapter 6. The $k^{th}$ moment of a random variable is defined as $\mu_k=E(X^k)$. If there are $m$ parameters, we usually derive the first $m$ theoretical moments in terms of the parameters, and by equating these theoretical moments to the sample moments $\hat{\mu}_k=1/n\sum_1^n{x_i^k}$ we obtain the estimates.</p>
<p>Let’s take the univariate Gaussian distribution as an example. We want to estimate the mean $\mu$ and the variance $\sigma^2$. The first and second theoretical moments are $\mu$ and $\mu^2+\sigma^2$, respectively. Thus, the estimates $\hat\mu$ and $\hat{\sigma}^2$ are $1/n\sum_1^n{x_i}$ and $1/n\sum_1^n{x_i^2}-(1/n\sum_1^n{x_i})^2=1/n\sum_1^n{(x_i-\hat\mu)^2}$. The code snippets below show the implementation.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> n <span class="o">=</span> <span class="m">1000</span>
<span class="lineno"> 3 </span><span class="o">></span> mu <span class="o">=</span> <span class="m">2.5</span>
<span class="lineno"> 4 </span><span class="o">></span> var <span class="o">=</span> <span class="m">1.6</span>
<span class="lineno"> 5 </span><span class="o">></span> x <span class="o">=</span> rnorm<span class="p">(</span>n<span class="p">,</span> mu<span class="p">,</span> <span class="kp">sqrt</span><span class="p">(</span>var<span class="p">))</span>
<span class="lineno"> 6 </span><span class="o">></span> mu_est <span class="o">=</span> <span class="kp">mean</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">></span> var_est <span class="o">=</span> <span class="kp">mean</span><span class="p">((</span>x<span class="o">-</span>mu_est<span class="p">)</span><span class="o">^</span><span class="m">2</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>mu_est<span class="p">)</span>
<span class="lineno"> 9 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">2.467334</span>
<span class="lineno">10 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>var_est<span class="p">)</span>
<span class="lineno">11 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">1.60647</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">n</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">mu</span><span class="p">,</span> <span class="n">var</span> <span class="o">=</span> <span class="mf">2.5</span><span class="p">,</span> <span class="mf">1.6</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="mf">2.5</span><span class="p">,</span> <span class="mf">1.6</span><span class="o">**</span><span class="mf">0.5</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="n">mu_est</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">mean</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">>>></span> <span class="n">var_est</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">mean</span><span class="p">((</span><span class="n">x</span><span class="o">-</span><span class="n">mu_est</span><span class="p">)</span><span class="o">**</span><span class="mi">2</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">mu_est</span><span class="p">)</span>
<span class="lineno"> 9 </span><span class="mf">2.524453331300827</span>
<span class="lineno">10 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">var_est</span><span class="p">)</span>
<span class="lineno">11 </span><span class="mf">1.5326479835704276</span></code></pre></figure></div>
</div>
<p>We could also fit another distribution to the data generated from a normal distribution. But which one is better? One answer is to compare the likelihood functions evaluated at the fitted parameters and choose the one that gives the larger likelihood value.</p>
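<p>As a sketch of this comparison (the Laplace alternative and the scipy fitting calls are my choices for illustration, not from the text): fit both a Gaussian and a Laplace distribution to Gaussian data and compare the log-likelihoods at the fitted parameters.</p>

```python
import numpy as np
from scipy import stats

np.random.seed(42)
x = np.random.normal(2.5, 1.6**0.5, size=1000)

# fit two candidate families to the same data by maximum likelihood
norm_params = stats.norm.fit(x)        # returns (loc, scale)
laplace_params = stats.laplace.fit(x)

# log-likelihood of the data at the fitted parameters
ll_norm = np.sum(stats.norm.logpdf(x, *norm_params))
ll_laplace = np.sum(stats.laplace.logpdf(x, *laplace_params))
# since the data are truly Gaussian, the Gaussian fit should score higher
print(ll_norm, ll_laplace)
```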
<p>Please note that different methods of fitting a distribution may lead to different parameter estimates. Also, an estimator may be biased: for example, the moment estimator of the population variance given above, $1/n\sum_1^n{(x_i-\hat\mu)^2}$, is biased, while the sample variance $1/(n-1)\sum_1^n{(x_i-\hat\mu)^2}$ is unbiased.</p>
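<p>A simple case where the two fitting methods disagree is a $\text{Uniform}(0,\theta)$ sample (the true value $\theta=3$ below is an assumed value for illustration): the method of moments equates $\bar{x}$ to the theoretical mean $\theta/2$, while the maximum likelihood estimate is the sample maximum.</p>

```python
import numpy as np

np.random.seed(42)
theta = 3.0  # true upper bound, an assumed value for illustration
x = np.random.uniform(0, theta, size=1000)

# method of moments: E(X) = theta / 2, so equate to the sample mean and solve
theta_mom = 2 * x.mean()
# maximum likelihood: the likelihood is maximized at the sample maximum
theta_mle = x.max()
print(theta_mom, theta_mle)
```

Both estimates are close to $3$, but they are not equal, illustrating that the choice of method matters.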
<h2 id="ci">Confidence interval</h2>
<p>In the previous Section we have seen how to estimate the parameters of a distribution. However, the parameter estimates from either the method of moments or maximum likelihood estimation are not the exact values of the unknown parameters. There are uncertainties associated with distribution fitting because the data to which the distribution is fit is usually just a random sample rather than the whole population. Suppose we could repeat the distribution fitting process $m$ times, and each time we collect a random sample of size $n$ (i.e., a collection of $n$ observations of the random variable of interest); then we would get $m$ estimates of the distribution parameters. Which estimate is the best to use? In fact, all these $m$ estimates are observations of an estimator. An estimator is a function of the random sample, and is itself a random variable. For example, $\hat\mu=1/n\sum_1^n{X_i}$ is an estimator for the $\mu$ parameter of a Gaussian distribution.</p>
<h3>Central limit theorem</h3>
<p>Now we know that when we fit a distribution, the parameter estimates are not the exact values of the unknown parameters. The question is how to quantify the uncertainty. To answer this question, it helps to know the distribution of the estimator. In the example of the Gaussian distribution, what distribution does the estimator $\hat\mu$ follow? It is straightforward to see that this estimator still follows a Gaussian distribution, since each $X_i$ follows a Gaussian distribution (a sum of independent Gaussian random variables still follows a Gaussian distribution). But what if $X_i$ comes from an arbitrary distribution? We may still be able to derive the exact distribution of the sample mean $\hat{\mu}$, but the derivation is not always easy. A simple alternative is to use the central limit theorem (<span class="caps">CLT</span>), which states that the mean of a random sample from a population with finite variance is approximately normally distributed when the sample size is large enough. More specifically, $\sqrt{n}(\bar{X}-\mu)/\sigma\xrightarrow{d} N(0,1)$, where $\mu$ and $\sigma$ are the population mean and standard deviation, respectively. When the population standard deviation is unknown, the sample standard deviation $S$ can be used instead, and $\sqrt{n}(\bar{X}-\mu)/S\xrightarrow{d} N(0,1)$ still holds.</p>
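<p>The CLT is easy to check empirically. The sketch below (the exponential population and the parameter choices are mine for illustration) standardizes the means of many exponential samples and verifies that they behave like $N(0,1)$ draws.</p>

```python
import numpy as np

np.random.seed(42)
n, reps = 200, 5000
rate = 2.0  # population: Exp(rate) with mean 1/rate and sd 1/rate

# sample means of a non-Gaussian (exponential) population, standardized
x = np.random.exponential(1 / rate, size=(reps, n))
z = np.sqrt(n) * (x.mean(axis=1) - 1 / rate) / (1 / rate)

# by the CLT the standardized means should be close to N(0, 1)
print(z.mean(), z.std())
```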
<p>We know if $Z$ is a standard Gaussian random variable, then $P(- Z_{(1+\alpha)/2} < Z\le Z_{(1+\alpha)/2}) = \alpha$, where $Z_{u}$ denotes the $u$ quantile of the standard Gaussian distribution.<br />
By <span class="caps">CLT</span>, $P(- Z_{(1+\alpha)/2} \le \sqrt{n}(\bar{X}-\mu)/S \le Z_{(1+\alpha)/2}) = \alpha$, which further leads to $P(\bar{X}- Z_{(1+\alpha)/2}S/\sqrt{n} \le \mu \le \bar{X}+ Z_{(1+\alpha)/2}S/\sqrt{n}) = \alpha$. The interval $\bar{X} \pm Z_{(1+\alpha)/2}S/\sqrt{n}$ is called an $\alpha$ confidence interval (CI) for the population mean $\mu$. For example, since $Z_{(1+0.95)/2}=1.96$ the 95% CI is constructed as $\bar{X} \pm 1.96S/\sqrt{n}$. Of course, if we know the exact value of $\sigma$, we could use $\bar{X} \pm 1.96\sigma/\sqrt{n}$ instead.</p>
<p>We show an example for the confidence interval calculation of population mean in normal distribution.<br />
<language>R</language><br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> n <span class="o">=</span> <span class="m">1000</span>
<span class="lineno"> 3 </span><span class="o">></span> mu <span class="o">=</span> <span class="m">2.5</span>
<span class="lineno"> 4 </span><span class="o">></span> var <span class="o">=</span> <span class="m">1.6</span>
<span class="lineno"> 5 </span><span class="o">></span> x <span class="o">=</span> rnorm<span class="p">(</span>n<span class="p">,</span> mu<span class="p">,</span> <span class="kp">sqrt</span><span class="p">(</span>var<span class="p">))</span>
<span class="lineno"> 6 </span><span class="o">></span> mu_est <span class="o">=</span> <span class="kp">mean</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">></span> <span class="c1"># we can also calculate S with sd(x) in R</span>
<span class="lineno"> 8 </span><span class="o">></span> S <span class="o">=</span> <span class="kp">sqrt</span><span class="p">(</span><span class="kp">sum</span><span class="p">((</span>x <span class="o">-</span> mu_est<span class="p">)</span> <span class="o">^</span> <span class="m">2</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span>n <span class="o">-</span> <span class="m">1</span><span class="p">))</span>
<span class="lineno"> 9 </span><span class="o">></span> alpha <span class="o">=</span> <span class="m">0.95</span>
<span class="lineno">10 </span><span class="o">></span> Z_alpha <span class="o">=</span> qnorm<span class="p">((</span><span class="m">1</span> <span class="o">+</span> alpha<span class="p">)</span> <span class="o">/</span> <span class="m">2</span><span class="p">)</span>
<span class="lineno">11 </span><span class="o">></span> CI <span class="o">=</span> <span class="kt">c</span><span class="p">(</span>mu_est <span class="o">-</span> Z_alpha <span class="o">*</span> S <span class="o">/</span> <span class="kp">sqrt</span><span class="p">(</span>n<span class="p">),</span> mu_est <span class="o">+</span> Z_alpha <span class="o">*</span> S <span class="o">/</span> <span class="kp">sqrt</span><span class="p">(</span>n<span class="p">))</span>
<span class="lineno">12 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>CI<span class="p">)</span>
<span class="lineno">13 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">2.388738</span> <span class="m">2.545931</span></code></pre></figure></p>
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">scipy</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">n</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">mu</span><span class="p">,</span> <span class="n">var</span> <span class="o">=</span> <span class="mf">2.5</span><span class="p">,</span> <span class="mf">1.6</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="n">norm</span> <span class="o">=</span> <span class="n">scipy</span><span class="o">.</span><span class="n">stats</span><span class="o">.</span><span class="n">norm</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">scale</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">>>></span> <span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="n">mu</span><span class="p">,</span> <span class="n">var</span><span class="o">**</span><span class="mf">0.5</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">>>></span> <span class="n">mu_est</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">mean</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">S</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">sqrt</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">sum</span><span class="p">((</span><span class="n">x</span><span class="o">-</span><span class="n">mu_est</span><span class="p">)</span><span class="o">**</span><span class="mi">2</span><span class="p">)</span><span class="o">/</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">))</span>
<span class="lineno">10 </span><span class="o">>>></span> <span class="n">alpha</span> <span class="o">=</span> <span class="mf">0.95</span>
<span class="lineno">11 </span><span class="o">>>></span> <span class="n">Z_alpha</span> <span class="o">=</span> <span class="n">norm</span><span class="o">.</span><span class="n">ppf</span><span class="p">((</span><span class="mi">1</span><span class="o">+</span><span class="n">alpha</span><span class="p">)</span><span class="o">/</span><span class="mi">2</span><span class="p">)</span>
<span class="lineno">12 </span><span class="o">>>></span> <span class="n">CI</span> <span class="o">=</span> <span class="p">[</span><span class="n">mu_est</span> <span class="o">-</span> <span class="n">Z_alpha</span><span class="o">*</span><span class="n">S</span><span class="o">/</span><span class="n">np</span><span class="o">.</span><span class="n">sqrt</span><span class="p">(</span><span class="n">n</span><span class="p">),</span> <span class="n">mu_est</span> <span class="o">+</span> <span class="n">Z_alpha</span><span class="o">*</span><span class="n">S</span><span class="o">/</span><span class="n">np</span><span class="o">.</span><span class="n">sqrt</span><span class="p">(</span><span class="n">n</span><span class="p">)]</span>
<span class="lineno">13 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">CI</span><span class="p">)</span>
<span class="lineno">14 </span><span class="p">[</span><span class="mf">2.4476842124240363</span><span class="p">,</span> <span class="mf">2.6012224501776178</span><span class="p">]</span></code></pre></figure><p>The interpretation of CI is tricky. A 95% CI does not mean the probability that the constructed CI contains the true population mean is $0.95$. Actually, a constructed CI again is a random variable because the CI is created based on each random sample collected. Following the classic explanation from textbooks, when we repeat the procedures to create CI multiple times, the probability that the true parameter falls into the CI is equal to $\alpha$. Let’s do an example to see that point.<br />
<language>R</language><br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> B <span class="o">=</span> <span class="m">1000</span>
<span class="lineno"> 3 </span><span class="o">></span> n <span class="o">=</span> <span class="m">1000</span>
<span class="lineno"> 4 </span><span class="o">></span> mu <span class="o">=</span> <span class="m">2.5</span>
<span class="lineno"> 5 </span><span class="o">></span> var <span class="o">=</span> <span class="m">1.6</span>
<span class="lineno"> 6 </span><span class="o">></span> alpha <span class="o">=</span> <span class="m">0.95</span>
<span class="lineno"> 7 </span><span class="o">></span> Z_alpha <span class="o">=</span> qnorm<span class="p">((</span><span class="m">1</span> <span class="o">+</span> alpha<span class="p">)</span> <span class="o">/</span> <span class="m">2</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">></span> coverage <span class="o">=</span> <span class="kp">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> B<span class="p">)</span>
<span class="lineno"> 9 </span><span class="o">></span> <span class="kr">for</span> <span class="p">(</span>i <span class="kr">in</span> <span class="m">1</span><span class="o">:</span>B<span class="p">)</span> <span class="p">{</span>
<span class="lineno">10 </span><span class="o">+</span> x <span class="o">=</span> rnorm<span class="p">(</span>n<span class="p">,</span> mu<span class="p">,</span> <span class="kp">sqrt</span><span class="p">(</span>var<span class="p">))</span>
<span class="lineno">11 </span><span class="o">+</span> mu_est <span class="o">=</span> <span class="kp">mean</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno">12 </span><span class="o">+</span> S <span class="o">=</span> <span class="kp">sqrt</span><span class="p">(</span><span class="kp">sum</span><span class="p">((</span>x <span class="o">-</span> mu_est<span class="p">)</span> <span class="o">^</span> <span class="m">2</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span>n <span class="o">-</span> <span class="m">1</span><span class="p">))</span>
<span class="lineno">13 </span><span class="o">+</span> CI <span class="o">=</span> <span class="kt">c</span><span class="p">(</span>mu_est <span class="o">-</span> Z_alpha <span class="o">*</span> S <span class="o">/</span> <span class="kp">sqrt</span><span class="p">(</span>n<span class="p">),</span> mu_est <span class="o">+</span> Z_alpha <span class="o">*</span> S <span class="o">/</span> <span class="kp">sqrt</span><span class="p">(</span>n<span class="p">))</span>
<span class="lineno">14 </span><span class="o">+</span> coverage<span class="p">[</span>i<span class="p">]</span> <span class="o">=</span> <span class="p">(</span>mu <span class="o">>=</span> CI<span class="p">[</span><span class="m">1</span><span class="p">])</span> <span class="o">&&</span> <span class="p">(</span>mu <span class="o"><=</span> CI<span class="p">[</span><span class="m">2</span><span class="p">])</span>
<span class="lineno">15 </span><span class="o">+</span> <span class="p">}</span>
<span class="lineno">16 </span><span class="o">></span> <span class="c1"># the coverage probability should be close to alpha</span>
<span class="lineno">17 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span><span class="kp">mean</span><span class="p">(</span>coverage<span class="p">))</span>
<span class="lineno">18 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.941</span></code></pre></figure></p>
<h3>Bootstrap</h3>
<p>So far we have seen how to create the CI for the sample mean. What if we are interested in quantifying the uncertainty of other parameters, for example, the variance of a random variable? If we estimate these parameters with the maximum likelihood method, we can still construct the CI in a similar way based on large-sample theory. However, we will not discuss that approach in this book. Alternatively, we could use the bootstrap technique.</p>
<p>Bootstrap is simple yet powerful. It is a simulation-based technique. If we want to estimate a quantity $\theta$, first we write the estimator for $\theta$ as a function of a random sample, i.e., $\hat{\theta}=g(X_1,…,X_n)$. Next, we draw a random sample and calculate $\hat{\theta}$, and repeat this process $B$ times to get a collection of estimates denoted as $\hat{\theta}^{(1)},…,\hat{\theta}^{(B)}$. From these simulated estimates, we can simply use the percentiles $\hat{\theta}_{(1-\alpha)/2}$ and $\hat{\theta}_{(1+\alpha)/2}$ to construct the $\alpha$ CI. (When only a single observed sample is available, the nonparametric bootstrap draws each of the $B$ samples by resampling the observed data with replacement.) There are also other variants of the bootstrap method based on similar ideas.</p>
<p>Let’s try to use bootstrap to construct a 95% CI for the population variance of a Gaussian distributed random variable.</p>
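<p>A Python sketch of this experiment is shown first (my translation, mirroring the simulation-based approach used in this section); an R session follows.</p>

```python
import numpy as np

np.random.seed(42)
B, n = 1000, 1000
mu, var = 2.5, 1.6
alpha = 0.95

# repeatedly draw a sample and record the sample variance
var_sample = np.empty(B)
for i in range(B):
    x = np.random.normal(mu, var**0.5, size=n)
    var_sample[i] = np.sum((x - x.mean())**2) / (n - 1)

# percentile interval from the simulated sample variances
ci = np.quantile(var_sample, [(1 - alpha) / 2, (1 + alpha) / 2])
print(ci)
```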
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> B <span class="o">=</span> <span class="m">1000</span>
<span class="lineno"> 3 </span><span class="o">></span> n <span class="o">=</span> <span class="m">1000</span>
<span class="lineno"> 4 </span><span class="o">></span> mu <span class="o">=</span> <span class="m">2.5</span>
<span class="lineno"> 5 </span><span class="o">></span> var <span class="o">=</span> <span class="m">1.6</span>
<span class="lineno"> 6 </span><span class="o">></span> alpha <span class="o">=</span> <span class="m">0.95</span>
<span class="lineno"> 7 </span><span class="o">></span> Z_alpha <span class="o">=</span> qnorm<span class="p">((</span><span class="m">1</span> <span class="o">+</span> alpha<span class="p">)</span> <span class="o">/</span> <span class="m">2</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">></span> var_sample <span class="o">=</span> <span class="kp">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> B<span class="p">)</span>
<span class="lineno"> 9 </span><span class="o">></span> <span class="kr">for</span> <span class="p">(</span>i <span class="kr">in</span> <span class="m">1</span><span class="o">:</span>B<span class="p">)</span> <span class="p">{</span>
<span class="lineno">10 </span><span class="o">+</span> x <span class="o">=</span> rnorm<span class="p">(</span>n<span class="p">,</span> mu<span class="p">,</span> <span class="kp">sqrt</span><span class="p">(</span>var<span class="p">))</span>
<span class="lineno">11 </span><span class="o">+</span> var_sample<span class="p">[</span>i<span class="p">]</span> <span class="o">=</span> <span class="kp">sum</span><span class="p">((</span>x<span class="o">-</span><span class="kp">mean</span><span class="p">(</span>x<span class="p">))</span><span class="o">^</span><span class="m">2</span><span class="p">)</span><span class="o">/</span><span class="p">(</span>n<span class="m">-1</span><span class="p">)</span>
<span class="lineno">12 </span><span class="o">+</span> <span class="p">}</span>
<span class="lineno">13 </span><span class="o">></span> quantile<span class="p">(</span>var_sample<span class="p">,</span> <span class="kt">c</span><span class="p">((</span><span class="m">1</span><span class="o">-</span>alpha<span class="p">)</span><span class="o">/</span><span class="m">2</span><span class="p">,</span> <span class="p">(</span><span class="m">1</span><span class="o">+</span>alpha<span class="p">)</span><span class="o">/</span><span class="m">2</span><span class="p">))</span>
<span class="lineno">14 </span> <span class="m">2.5</span><span class="o">% 97.5%</span>
<span class="lineno">15 </span><span class="m">1.466778</span> <span class="m">1.738675</span> </code></pre></figure><h2 id="ht">Hypothesis testing</h2>
<p>We have talked about the confidence interval, which is used to quantify the uncertainty in parameter estimation. The root cause of this uncertainty is that we make inferences based on random samples. Hypothesis testing is another technique closely related to confidence intervals.</p>
<p>A hypothesis is an assertion about one or more populations, and a hypothesis test uses random samples to judge that assertion. When we collect a random sample, we may use it as evidence for or against the hypothesis. Usually a hypothesis test involves two hypotheses:</p>
<ul>
<li>$H_0$: the null hypothesis;</li>
<li>$H_1$: the alternative hypothesis.</li>
</ul>
<p>When we perform a hypothesis test, there are two possible outcomes: a) reject $H_0$ if the evidence is likely to support the alternative hypothesis, or b) do not reject $H_0$ because of insufficient evidence.</p>
<p>The key concept in hypothesis testing is the significance level, usually denoted by $\alpha$. Rejecting the null hypothesis when it is true is called a type I error, and the significance level is the probability of committing a type I error. Accepting the null hypothesis when the alternative hypothesis is true is called a type II error, whose probability is denoted by $\beta$; $1-\beta$ is called the power of a test.</p>
<p>Conducting a hypothesis test involves a few steps. First, we specify the null and alternative hypotheses and the significance level. Next, we calculate the test statistic based on the collected data. Finally, we calculate the $p$-value: if the $p$-value is smaller than the significance level, we reject the null hypothesis; otherwise we accept it. Some books describe a procedure that compares the test statistic with a critical region, which is essentially equivalent to the $p$-value approach. The real challenge in conducting a hypothesis test is calculating the $p$-value, which depends on the specific test being used. Please note that the $p$-value is itself a random variable since it is calculated from the random sample, and when the null hypothesis is true the $p$-value is uniformly distributed between $0$ and $1$.</p>
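<p>The uniformity of the $p$-value under the null hypothesis is easy to check by simulation. Below is a minimal Python sketch (assuming NumPy and SciPy are available): we repeatedly sample from a normal population, test the true mean, and verify that the fraction of rejections is close to the significance level.</p>

```python
import numpy as np
from scipy.stats import ttest_1samp

np.random.seed(42)
n_sim, n, mu, alpha = 5000, 50, 2.5, 0.05

# run many one-sample t tests with H0 true (we test the true mean)
pvalues = np.empty(n_sim)
for i in range(n_sim):
    x = np.random.normal(mu, 1.0, size=n)
    pvalues[i] = ttest_1samp(x, mu).pvalue

print(pvalues.mean())            # close to 0.5, as for a uniform(0, 1) variable
print((pvalues < alpha).mean())  # type I error rate, close to alpha = 0.05
```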
<p>The $p$-value is also a conditional probability. A common misinterpretation is that the $p$-value is the probability that the null hypothesis is true given the observed data. In fact, the $p$-value is the probability of observing data at least as extreme as the observation, given that the null hypothesis is true.</p>
<p>For many reasons, we will not go in-depth into the calculation of $p$-values in this book. The basic idea is to work out the statistical distribution of the test statistic under the null hypothesis. Let’s skip the theory behind it and go to the tools in R/Python.</p>
<h3>One-sample $t$ test</h3>
<p>The one-sample $t$ test is probably the most basic and useful hypothesis test. It determines whether the sample mean is statistically different from a hypothesized population mean for a continuous random variable. The one-sample $t$ test usually requires some assumptions: for example, the observations should be independent, and the population should be normally distributed or approximately normally distributed. The normality assumption is controversial, but that debate is beyond the scope of this book.</p>
<p>In the one-sample $t$ test, the alternative hypothesis can be two-sided or one-sided. A two-sided $H_1$ does not specify whether the population mean is greater or smaller than the hypothesized population mean; in contrast, a one-sided $H_1$ specifies the direction.</p>
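<p>Under the hood, the test statistic is $t = (\bar{x} - \mu_0)/(s/\sqrt{n})$, where $s$ is the sample standard deviation; under $H_0$ it follows a $t$ distribution with $n-1$ degrees of freedom. A short Python sketch (assuming SciPy is available) computes the statistic and two-sided $p$-value by hand and checks them against <code>ttest_1samp</code>:</p>

```python
import numpy as np
from scipy.stats import t as t_dist, ttest_1samp

np.random.seed(42)
n, mu_test = 50, 2.5
x = np.random.normal(2.5, 1.0, size=n)

# t statistic: (sample mean - hypothesized mean) / standard error
t_stat = (x.mean() - mu_test) / (x.std(ddof=1) / np.sqrt(n))
# two-sided p-value from a t distribution with n - 1 degrees of freedom
p_value = 2 * t_dist.sf(abs(t_stat), df=n - 1)

res = ttest_1samp(x, mu_test)
print(np.isclose(t_stat, res.statistic), np.isclose(p_value, res.pvalue))  # True True
```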
<p>Now let’s see how to perform one-sample $t$ test in R/Python.</p>
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> n <span class="o">=</span> <span class="m">50</span>
<span class="lineno"> 3 </span><span class="o">></span> mu <span class="o">=</span> <span class="m">2.5</span>
<span class="lineno"> 4 </span><span class="o">></span> sigma <span class="o">=</span> <span class="m">1.0</span>
<span class="lineno"> 5 </span><span class="o">></span> <span class="c1"># set the significance level</span>
<span class="lineno"> 6 </span><span class="o">></span> alpha <span class="o">=</span> <span class="m">0.05</span>
<span class="lineno"> 7 </span><span class="o">></span> <span class="c1"># sample 50 independent observations from a normal distribution</span>
<span class="lineno"> 8 </span><span class="o">></span> x <span class="o">=</span> rnorm<span class="p">(</span>n<span class="p">,</span> mu<span class="p">,</span> sigma<span class="p">)</span>
<span class="lineno"> 9 </span><span class="o">></span> mu_test <span class="o">=</span> <span class="m">2.5</span>
<span class="lineno">10 </span><span class="o">></span>
<span class="lineno">11 </span><span class="o">></span> <span class="c1"># first, let's do two-sided t test</span>
<span class="lineno">12 </span><span class="o">></span> <span class="c1"># H0: the population mean is equal to mu_test</span>
<span class="lineno">13 </span><span class="o">></span> <span class="c1"># H1: the population mean is not equal to mu_test</span>
<span class="lineno">14 </span><span class="o">></span> t1 <span class="o">=</span> t.test<span class="p">(</span>x<span class="p">,</span> alternative <span class="o">=</span> <span class="s">"two.sided"</span><span class="p">,</span> mu <span class="o">=</span> mu_test<span class="p">)</span>
<span class="lineno">15 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>t1<span class="p">)</span>
<span class="lineno">16 </span>
<span class="lineno">17 </span> One Sample <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">18 </span>
<span class="lineno">19 </span>data<span class="o">:</span> x
<span class="lineno">20 </span>t <span class="o">=</span> <span class="m">-0.21906</span><span class="p">,</span> df <span class="o">=</span> <span class="m">49</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.8275</span>
<span class="lineno">21 </span>alternative hypothesis<span class="o">:</span> true mean is not equal to <span class="m">2.5</span>
<span class="lineno">22 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">23 </span> <span class="m">2.137082</span> <span class="m">2.791574</span>
<span class="lineno">24 </span>sample estimates<span class="o">:</span>
<span class="lineno">25 </span>mean of x
<span class="lineno">26 </span> <span class="m">2.464328</span>
<span class="lineno">27 </span>
<span class="lineno">28 </span><span class="o">></span> <span class="c1"># based on the p-value (> alpha), we accept the null hypothesis stated above</span>
<span class="lineno">29 </span><span class="o">></span> t1<span class="o">$</span>value
<span class="lineno">30 </span><span class="kc">NULL</span>
<span class="lineno">31 </span><span class="o">></span>
<span class="lineno">32 </span><span class="o">></span> <span class="c1"># next, let's do a one-sided t test</span>
<span class="lineno">33 </span><span class="o">></span> <span class="c1"># H0: the population mean is equal to mu_test</span>
<span class="lineno">34 </span><span class="o">></span> <span class="c1"># H1: the population mean is greater than mu_test</span>
<span class="lineno">35 </span><span class="o">></span> t2 <span class="o">=</span> t.test<span class="p">(</span>x<span class="p">,</span> alternative <span class="o">=</span> <span class="s">"greater"</span><span class="p">,</span> mu <span class="o">=</span> mu_test<span class="p">)</span>
<span class="lineno">36 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>t2<span class="p">)</span>
<span class="lineno">37 </span>
<span class="lineno">38 </span> One Sample <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">39 </span>
<span class="lineno">40 </span>data<span class="o">:</span> x
<span class="lineno">41 </span>t <span class="o">=</span> <span class="m">-0.21906</span><span class="p">,</span> df <span class="o">=</span> <span class="m">49</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.5862</span>
<span class="lineno">42 </span>alternative hypothesis<span class="o">:</span> true mean is greater than <span class="m">2.5</span>
<span class="lineno">43 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">44 </span> <span class="m">2.191313</span> <span class="kc">Inf</span>
<span class="lineno">45 </span>sample estimates<span class="o">:</span>
<span class="lineno">46 </span>mean of x
<span class="lineno">47 </span> <span class="m">2.464328</span>
<span class="lineno">48 </span>
<span class="lineno">49 </span><span class="o">></span> <span class="c1"># based on the p-value (> alpha), we accept the null hypothesis stated above</span>
<span class="lineno">50 </span><span class="o">></span> t2<span class="o">$</span>p.value
<span class="lineno">51 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.5862418</span>
<span class="lineno">52 </span><span class="o">></span>
<span class="lineno">53 </span><span class="o">></span> <span class="c1"># let's change mu_test and perform a one-sided test again</span>
<span class="lineno">54 </span><span class="o">></span> mu_test_new <span class="o">=</span> <span class="m">2.0</span>
<span class="lineno">55 </span><span class="o">></span> t3 <span class="o">=</span> t.test<span class="p">(</span>x<span class="p">,</span> alternative <span class="o">=</span> <span class="s">"greater"</span><span class="p">,</span> mu <span class="o">=</span> mu_test_new<span class="p">)</span>
<span class="lineno">56 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>t3<span class="p">)</span>
<span class="lineno">57 </span>
<span class="lineno">58 </span> One Sample <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">59 </span>
<span class="lineno">60 </span>data<span class="o">:</span> x
<span class="lineno">61 </span>t <span class="o">=</span> <span class="m">2.8514</span><span class="p">,</span> df <span class="o">=</span> <span class="m">49</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.003177</span>
<span class="lineno">62 </span>alternative hypothesis<span class="o">:</span> true mean is greater than <span class="m">2</span>
<span class="lineno">63 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">64 </span> <span class="m">2.191313</span> <span class="kc">Inf</span>
<span class="lineno">65 </span>sample estimates<span class="o">:</span>
<span class="lineno">66 </span>mean of x
<span class="lineno">67 </span> <span class="m">2.464328</span>
<span class="lineno">68 </span>
<span class="lineno">69 </span><span class="o">></span> <span class="c1"># based on the p-value (< alpha), we reject the null hypothesis stated above, i.e., we conclude the population mean is greater than 2.</span>
<span class="lineno">70 </span><span class="o">></span> t3<span class="o">$</span>p.value
<span class="lineno">71 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.003177329</span></code></pre></figure><language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">scipy.stats</span> <span class="k">import</span> <span class="n">ttest_1samp</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">n</span> <span class="o">=</span> <span class="mi">50</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">mu</span><span class="p">,</span> <span class="n">sigma</span> <span class="o">=</span> <span class="mf">2.5</span><span class="p">,</span> <span class="mf">1.0</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="c1"># generate 50 independent samples from a normal distribution</span>
<span class="lineno"> 7 </span><span class="o">...</span> <span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="n">mu</span><span class="p">,</span> <span class="n">sigma</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">>>></span> <span class="n">alpha</span> <span class="o">=</span> <span class="mf">0.05</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">mu_test</span> <span class="o">=</span> <span class="mf">2.5</span>
<span class="lineno">10 </span><span class="o">>>></span> <span class="c1"># H0: population mean equals mu_test</span>
<span class="lineno">11 </span><span class="o">...</span> <span class="c1"># H1: population mean does not equal mu_test</span>
<span class="lineno">12 </span><span class="o">...</span> <span class="c1"># perform a two-sided t test</span>
<span class="lineno">13 </span><span class="o">...</span> <span class="n">t1</span> <span class="o">=</span> <span class="n">ttest_1samp</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">mu_test</span><span class="p">)</span>
<span class="lineno">14 </span><span class="o">>>></span> <span class="c1"># based on the p-value (> alpha), we accept the null hypothesis stated above</span>
<span class="lineno">15 </span><span class="o">...</span> <span class="nb">print</span><span class="p">(</span><span class="n">t1</span><span class="o">.</span><span class="n">pvalue</span><span class="p">)</span>
<span class="lineno">16 </span><span class="mf">0.09403837414922567</span></code></pre></figure><p>In the R code snippet we show both one-sided and two-sided one-sample $t$ tests, but only a two-sided test in the Python program. It is feasible to perform a one-sided test in an indirect manner with the same function. For hypothesis testing, R seems a better choice than Python.</p>
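<p>For completeness, the indirect route amounts to halving the two-sided $p$-value when the sample mean falls on the side that $H_1$ specifies (recent SciPy versions also accept an <code>alternative=</code> argument directly). A sketch, assuming SciPy is available:</p>

```python
import numpy as np
from scipy.stats import ttest_1samp

np.random.seed(42)
x = np.random.normal(2.5, 1.0, size=50)
mu_test = 2.0

# H1: the population mean is greater than mu_test
res = ttest_1samp(x, mu_test)  # two-sided by default
if res.statistic > 0:
    # sample mean lies in H1's direction: halve the two-sided p-value
    p_one_sided = res.pvalue / 2
else:
    p_one_sided = 1 - res.pvalue / 2
print(p_one_sided)
```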
<h3>Two-sample $t$ test</h3>
<p>The two-sample $t$ test is a bit more complex than the one-sample $t$ test. There are two types commonly used in practice: the paired $t$ test and the unpaired $t$ test. In a paired $t$ test, the samples are paired together. For example, we may want to know whether there is a significant difference between human blood pressure in the morning and in the evening, and hypothesis testing may help. To do so, we may conduct an experiment and collect the blood pressure in the morning and in the evening from a number of participants. Let $X_i$ denote the morning blood pressure and $Y_i$ denote the evening blood pressure of participant $i$. We should pair $X_i$ and $Y_i$ since they are measured from the same person. The paired $t$ test can then be used to compare the population means. The null hypothesis usually states that the difference of the two population means is equal to a hypothesized value. Just like the one-sample $t$ test, the paired $t$ test can be one-sided or two-sided.</p>
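<p>Conceptually, the paired $t$ test is just a one-sample $t$ test applied to the within-pair differences $D_i = X_i - Y_i$. The Python sketch below (assuming SciPy is available; the data are made up for illustration) verifies that <code>ttest_rel</code> agrees with <code>ttest_1samp</code> on the differences.</p>

```python
import numpy as np
from scipy.stats import ttest_rel, ttest_1samp

np.random.seed(42)
n = 50
# hypothetical paired measurements, e.g., morning/evening blood pressure
x = np.random.normal(2.0, 1.0, size=n)
y = np.random.normal(1.0, 1.5, size=n)

paired = ttest_rel(x, y)              # paired two-sample t test
one_sample = ttest_1samp(x - y, 0.0)  # one-sample t test on the differences

print(np.isclose(paired.statistic, one_sample.statistic),
      np.isclose(paired.pvalue, one_sample.pvalue))  # True True
```

<p>Note that <code>ttest_rel</code> only tests a zero hypothesized difference; to test a difference such as $1.2$, run <code>ttest_1samp(x - y, 1.2)</code> instead.</p>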
<p>Now let’s see how to do paired $t$ test in R/Python.</p>
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kn">library</span><span class="p">(</span>mvtnorm<span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">></span> n <span class="o">=</span> <span class="m">50</span>
<span class="lineno"> 4 </span><span class="o">></span> <span class="c1"># we sample from a bivariate normal distribution</span>
<span class="lineno"> 5 </span><span class="o">></span> mu <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">2.0</span><span class="p">,</span> <span class="m">1.0</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">></span> sigma <span class="o">=</span> <span class="kp">diag</span><span class="p">(</span><span class="kt">c</span><span class="p">(</span><span class="m">1.0</span><span class="p">,</span> <span class="m">1.5</span><span class="p">))</span>
<span class="lineno"> 7 </span><span class="o">></span> <span class="c1"># significance level</span>
<span class="lineno"> 8 </span><span class="o">></span> alpha <span class="o">=</span> <span class="m">0.05</span>
<span class="lineno"> 9 </span><span class="o">></span> mu_diff <span class="o">=</span> <span class="m">1.2</span>
<span class="lineno">10 </span><span class="o">></span> <span class="c1"># sample 50 independent observations from the bivariate normal distribution</span>
<span class="lineno">11 </span><span class="o">></span> x <span class="o">=</span> rmvnorm<span class="p">(</span>n<span class="p">,</span> mu<span class="p">,</span> sigma<span class="p">)</span>
<span class="lineno">12 </span><span class="o">></span>
<span class="lineno">13 </span><span class="o">></span> <span class="c1"># first, let's do a two-sided t test</span>
<span class="lineno">14 </span><span class="o">></span> <span class="c1"># H0: the population means' difference is equal to mu_diff</span>
<span class="lineno">15 </span><span class="o">></span> <span class="c1"># H1: the population means' difference is not equal to mu_diff</span>
<span class="lineno">16 </span><span class="o">></span> t.test<span class="p">(</span>x<span class="o">=</span>x<span class="p">[,</span><span class="m">1</span><span class="p">],</span> y<span class="o">=</span>x<span class="p">[,</span><span class="m">2</span><span class="p">],</span> mu<span class="o">=</span>mu_diff<span class="p">,</span> paired <span class="o">=</span> <span class="kc">TRUE</span><span class="p">,</span> alternative <span class="o">=</span> <span class="s">"two.sided"</span><span class="p">)</span>
<span class="lineno">17 </span>
<span class="lineno">18 </span> Paired <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">19 </span>
<span class="lineno">20 </span>data<span class="o">:</span> x<span class="p">[,</span> <span class="m">1</span><span class="p">]</span> and x<span class="p">[,</span> <span class="m">2</span><span class="p">]</span>
<span class="lineno">21 </span>t <span class="o">=</span> <span class="m">-0.86424</span><span class="p">,</span> df <span class="o">=</span> <span class="m">49</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.3917</span>
<span class="lineno">22 </span>alternative hypothesis<span class="o">:</span> true difference <span class="kr">in</span> means is not equal to <span class="m">1.2</span>
<span class="lineno">23 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">24 </span> <span class="m">0.5417625</span> <span class="m">1.4623353</span>
<span class="lineno">25 </span>sample estimates<span class="o">:</span>
<span class="lineno">26 </span>mean of the differences
<span class="lineno">27 </span> <span class="m">1.002049</span>
<span class="lineno">28 </span>
<span class="lineno">29 </span><span class="o">></span> <span class="c1"># we can't reject H0 since p-value > alpha</span>
<span class="lineno">30 </span><span class="o">></span>
<span class="lineno">31 </span><span class="o">></span> <span class="c1"># next, let's do a one-sided t test</span>
<span class="lineno">32 </span><span class="o">></span> <span class="c1"># H0: the population means' difference is equal to mu_diff</span>
<span class="lineno">33 </span><span class="o">></span> <span class="c1"># H1: the population means' difference is less than mu_diff</span>
<span class="lineno">34 </span><span class="o">></span> t.test<span class="p">(</span>x<span class="o">=</span>x<span class="p">[,</span><span class="m">1</span><span class="p">],</span> y<span class="o">=</span>x<span class="p">[,</span><span class="m">2</span><span class="p">],</span> mu<span class="o">=</span>mu_diff<span class="p">,</span> paired <span class="o">=</span> <span class="kc">TRUE</span><span class="p">,</span> alternative <span class="o">=</span> <span class="s">"less"</span><span class="p">)</span>
<span class="lineno">35 </span>
<span class="lineno">36 </span> Paired <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">37 </span>
<span class="lineno">38 </span>data<span class="o">:</span> x<span class="p">[,</span> <span class="m">1</span><span class="p">]</span> and x<span class="p">[,</span> <span class="m">2</span><span class="p">]</span>
<span class="lineno">39 </span>t <span class="o">=</span> <span class="m">-0.86424</span><span class="p">,</span> df <span class="o">=</span> <span class="m">49</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.1958</span>
<span class="lineno">40 </span>alternative hypothesis<span class="o">:</span> true difference <span class="kr">in</span> means is less than <span class="m">1.2</span>
<span class="lineno">41 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">42 </span> <span class="o">-</span><span class="kc">Inf</span> <span class="m">1.386057</span>
<span class="lineno">43 </span>sample estimates<span class="o">:</span>
<span class="lineno">44 </span>mean of the differences
<span class="lineno">45 </span> <span class="m">1.002049</span>
<span class="lineno">46 </span>
<span class="lineno">47 </span><span class="o">></span> <span class="c1"># we can't reject H0 since p-value > alpha</span></code></pre></figure><p>The paired $t$ test can also be done in Python, but we will not show the examples here.</p>
<p>The unpaired $t$ test is also about the difference between two population means, but the samples are not paired. For example, we may want to study whether the average blood pressure of men is higher than that of women. In the unpaired $t$ test, we also have to specify whether or not we assume the two populations have equal variance.</p>
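<p>In Python the unpaired test is provided by <code>scipy.stats.ttest_ind</code>, whose <code>equal_var</code> flag switches between the pooled-variance test and Welch's test; unlike R's <code>t.test</code>, it only tests a zero hypothesized difference. A sketch (assuming SciPy is available):</p>

```python
import numpy as np
from scipy.stats import ttest_ind

np.random.seed(42)
n = 50
x1 = np.random.normal(2.0, 1.0, size=n)  # sample from population 1
x2 = np.random.normal(1.0, 1.5, size=n)  # sample from population 2

# H0: the two population means are equal
pooled = ttest_ind(x1, x2, equal_var=True)   # assumes equal variances
welch = ttest_ind(x1, x2, equal_var=False)   # Welch's test, unequal variances

# with equal sample sizes the two t statistics coincide;
# only the degrees of freedom (and hence the p-values) differ
print(np.isclose(pooled.statistic, welch.statistic))  # True
```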
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> n <span class="o">=</span> <span class="m">50</span>
<span class="lineno"> 3 </span><span class="o">></span> alpha <span class="o">=</span> <span class="m">0.05</span>
<span class="lineno"> 4 </span><span class="o">></span> mu1 <span class="o">=</span> <span class="m">2.0</span>
<span class="lineno"> 5 </span><span class="o">></span> mu2 <span class="o">=</span> <span class="m">1.0</span>
<span class="lineno"> 6 </span><span class="o">></span> sigma1 <span class="o">=</span> <span class="m">1.0</span>
<span class="lineno"> 7 </span><span class="o">></span> sigma2 <span class="o">=</span> <span class="m">1.5</span>
<span class="lineno"> 8 </span><span class="o">></span> <span class="c1"># generate two samples independently</span>
<span class="lineno"> 9 </span><span class="o">></span> x1 <span class="o">=</span> rnorm<span class="p">(</span>n<span class="p">,</span> mu1<span class="p">,</span> sigma1<span class="p">)</span>
<span class="lineno">10 </span><span class="o">></span> x2 <span class="o">=</span> rnorm<span class="p">(</span>n<span class="p">,</span> mu2<span class="p">,</span> sigma2<span class="p">)</span>
<span class="lineno">11 </span><span class="o">></span> <span class="c1"># first, let's do a two-sided t test</span>
<span class="lineno">12 </span><span class="o">></span> <span class="c1"># assume equal variance for the two populations</span>
<span class="lineno">13 </span><span class="o">></span> mu_diff <span class="o">=</span> <span class="m">0.5</span>
<span class="lineno">14 </span><span class="o">></span> <span class="c1"># H0: the population means' difference is equal to mu_diff</span>
<span class="lineno">15 </span><span class="o">></span> <span class="c1"># H1: the population means' difference is not equal to mu_diff</span>
<span class="lineno">16 </span><span class="o">></span> t.test<span class="p">(</span>x1<span class="p">,</span> x2<span class="p">,</span> alternative <span class="o">=</span> <span class="s">"two.sided"</span><span class="p">,</span> mu <span class="o">=</span> mu_diff<span class="p">,</span> paired <span class="o">=</span> <span class="kc">FALSE</span><span class="p">,</span> var.equal <span class="o">=</span> <span class="kc">TRUE</span><span class="p">)</span>
<span class="lineno">17 </span>
<span class="lineno">18 </span> Two Sample <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">19 </span>
<span class="lineno">20 </span>data<span class="o">:</span> x1 and x2
<span class="lineno">21 </span>t <span class="o">=</span> <span class="m">1.2286</span><span class="p">,</span> df <span class="o">=</span> <span class="m">98</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.2222</span>
<span class="lineno">22 </span>alternative hypothesis<span class="o">:</span> true difference <span class="kr">in</span> means is not equal to <span class="m">0.5</span>
<span class="lineno">23 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">24 </span> <span class="m">0.3072577</span> <span class="m">1.3192945</span>
<span class="lineno">25 </span>sample estimates<span class="o">:</span>
<span class="lineno">26 </span>mean of x mean of y
<span class="lineno">27 </span> <span class="m">1.964328</span> <span class="m">1.151052</span>
<span class="lineno">28 </span>
<span class="lineno">29 </span><span class="o">></span> <span class="c1"># we accept the null hypothesis since p-value>alpha </span>
<span class="lineno">30 </span><span class="o">></span>
<span class="lineno">31 </span><span class="o">></span> <span class="c1"># now, we don't assume the equal variance</span>
<span class="lineno">32 </span><span class="o">></span> t.test<span class="p">(</span>x1<span class="p">,</span> x2<span class="p">,</span> alternative <span class="o">=</span> <span class="s">"two.sided"</span><span class="p">,</span> mu <span class="o">=</span> mu_diff<span class="p">,</span> paired <span class="o">=</span> <span class="kc">FALSE</span><span class="p">,</span> var.equal <span class="o">=</span> <span class="kc">FALSE</span><span class="p">)</span>
<span class="lineno">33 </span>
<span class="lineno">34 </span> Welch Two Sample <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">35 </span>
<span class="lineno">36 </span>data<span class="o">:</span> x1 and x2
<span class="lineno">37 </span>t <span class="o">=</span> <span class="m">1.2286</span><span class="p">,</span> df <span class="o">=</span> <span class="m">94.78</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.2223</span>
<span class="lineno">38 </span>alternative hypothesis<span class="o">:</span> true difference <span class="kr">in</span> means is not equal to <span class="m">0.5</span>
<span class="lineno">39 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">40 </span> <span class="m">0.3070428</span> <span class="m">1.3195094</span>
<span class="lineno">41 </span>sample estimates<span class="o">:</span>
<span class="lineno">42 </span>mean of x mean of y
<span class="lineno">43 </span> <span class="m">1.964328</span> <span class="m">1.151052</span>
<span class="lineno">44 </span>
<span class="lineno">45 </span><span class="o">></span> <span class="c1"># there is no big change for p-value, we also accept the null hypothesis since p-value>alpha </span>
<span class="lineno">46 </span><span class="o">></span>
<span class="lineno">47 </span><span class="o">></span> <span class="c1"># let's try a one-sided test without assuming equal variance</span>
<span class="lineno">48 </span><span class="o">></span> <span class="c1"># H0: the population means' difference is equal to mu_diff</span>
<span class="lineno">49 </span><span class="o">></span> <span class="c1"># H1: the population means' difference is less than mu_diff</span>
<span class="lineno">50 </span><span class="o">></span> t.test<span class="p">(</span>x1<span class="p">,</span> x2<span class="p">,</span> alternative <span class="o">=</span> <span class="s">"less"</span><span class="p">,</span> mu <span class="o">=</span> mu_diff<span class="p">,</span> paired <span class="o">=</span> <span class="kc">FALSE</span><span class="p">,</span> var.equal <span class="o">=</span> <span class="kc">FALSE</span><span class="p">)</span>
<span class="lineno">51 </span>
<span class="lineno">52 </span> Welch Two Sample <span class="kp">t</span><span class="o">-</span>test
<span class="lineno">53 </span>
<span class="lineno">54 </span>data<span class="o">:</span> x1 and x2
<span class="lineno">55 </span>t <span class="o">=</span> <span class="m">1.2286</span><span class="p">,</span> df <span class="o">=</span> <span class="m">94.78</span><span class="p">,</span> p<span class="o">-</span>value <span class="o">=</span> <span class="m">0.8889</span>
<span class="lineno">56 </span>alternative hypothesis<span class="o">:</span> true difference <span class="kr">in</span> means is less than <span class="m">0.5</span>
<span class="lineno">57 </span><span class="m">95</span> percent confidence interval<span class="o">:</span>
<span class="lineno">58 </span> <span class="o">-</span><span class="kc">Inf</span> <span class="m">1.236837</span>
<span class="lineno">59 </span>sample estimates<span class="o">:</span>
<span class="lineno">60 </span>mean of x mean of y
<span class="lineno">61 </span> <span class="m">1.964328</span> <span class="m">1.151052</span>
<span class="lineno">62 </span>
<span class="lineno">63 </span><span class="o">></span> <span class="c1"># we accept the null hypothesis since p-value>alpha</span></code></pre></figure><p>There are many other important hypothesis tests, such as the chi-squared test<sup class="footnote" id="fnr5"><a href="#fn5">5</a></sup> and the likelihood-ratio test<sup class="footnote" id="fnr6"><a href="#fn6">6</a></sup>.</p>
<hr />
<p style="vertical-align:middle;" class="footnote" id="fn1"><a href="#fnr1"><sup>1</sup></a> https://en.wikipedia.org/wiki/Exponential_family</p>
<p class="footnote" id="fn2"><a href="#fnr2"><sup>2</sup></a> https://en.wikipedia.org/wiki/Rejection_sampling</p>
<p class="footnote" id="fn3"><a href="#fnr3"><sup>3</sup></a> http://www.gaussianprocess.org/gpml/chapters/RW2.pdf</p>
<p class="footnote" id="fn4"><a href="#fnr4"><sup>4</sup></a> https://en.wikipedia.org/wiki/Copula_(probability_theory)</p>
<p class="footnote" id="fn5"><a href="#fnr5"><sup>5</sup></a> https://en.wikipedia.org/wiki/Chi-squared_test</p>
<p class="footnote" id="fn6"><a href="#fnr6"><sup>6</sup></a> https://en.wikipedia.org/wiki/Likelihood-ratio_test</p>
</div> <!-- /container-fluid -->
<!-- Footer -->
<div id="footer">
<div class="inner">
<div class="container-fluid">
<div class="row">
<div class="span16">
<p style="text-align:center">
© 2022 Anotherbookondatascience.com
</p>
</div>
</div>
</div>
</div>
</div>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {
inlineMath: [ ['$','$'] ],
processEscapes: true
},
TeX: { equationNumbers: { autoNumber: "AMS" },
Macros: {
sb: "_"
} }
});
</script>
<script
type="text/javascript"
charset="utf-8"
src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"
>
</script>
</body>
</html>