Commit 619441b

remove TuringExt
1 parent e322666 commit 619441b

14 files changed: 132 additions & 95 deletions

Project.toml

Lines changed: 0 additions & 6 deletions
@@ -6,7 +6,6 @@ version = "0.13.0"
 [deps]
 ArgCheck = "dce04be8-c92d-5529-be00-80e4d2c0e197"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
-DynamicPPL = "366bfd00-2699-11ea-058f-f148b4cae6d8"
 FunctionZeros = "b21f74c0-b399-568f-9643-d20f4fa2c814"
 HCubature = "19dc6840-f33b-545b-b366-655c7e3ffd49"
 Interpolations = "a98d9a8b-a2ab-59e6-89dd-64a1c18fca59"
@@ -22,16 +21,13 @@ StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
 
 [weakdeps]
 Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
-Turing = "fce5fe82-541a-59a6-adf8-730c64b5f9a0"
 
 [extensions]
 PlotsExt = "Plots"
-TuringExt = "Turing"
 
 [compat]
 ArgCheck = "2.5.0"
 Distributions = "v0.24.6, 0.25"
-DynamicPPL = "0.31.0,0.32.0,0.33.0,0.34.0,0.35.0,0.36.0"
 FunctionZeros = "0.2.0,0.3.0, 1"
 HCubature = "1"
 Interpolations = "0.14.0,0.15.0,0.16.0"
@@ -44,14 +40,12 @@ SpecialFunctions = "1,2"
 Statistics = "1"
 StatsAPI = "1.0.0"
 StatsBase = "0.33.0,0.34.0"
-Turing = "0.35.0,0.36.0,0.37,0.38.0,0.39.0"
 julia = "1"
 
 [extras]
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
-Turing = "fce5fe82-541a-59a6-adf8-730c64b5f9a0"
 
 [targets]
 test = ["Test"]

docs/Project.toml

Lines changed: 4 additions & 5 deletions
@@ -4,26 +4,25 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 FillArrays = "1a297f60-69ca-5386-bcde-b61e274b549b"
-Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 KernelDensity = "5ab0869b-81aa-558d-bb23-cbf5423bbe9b"
 NeuralEstimators = "38f6df31-6b4a-4144-b2af-7ace2da57606"
-ParetoSmooth = "a68b5a21-f429-434e-8bfa-46b447300aac"
-Pigeons = "0eb8d820-af6a-4919-95ae-11206f830c31"
 Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
 StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
 StatsModels = "3eaba693-59b7-5ba5-a881-562e759f1c8d"
 StatsPlots = "f3b207a7-027a-5e70-b257-86293d7955fd"
 Turing = "fce5fe82-541a-59a6-adf8-730c64b5f9a0"
+TuringUtilities = "35dc62cd-6c01-44e1-a736-6cc36bfce0cc"
+
+[sources]
+TuringUtilities = {url = "https://github.com/itsdfish/TuringUtilities.jl"}
 
 [compat]
 Colors = "0.12.0,0.13.0"
 DataFrames = "1.0.0"
 Documenter = "1"
 KernelDensity = "0.6.0"
-ParetoSmooth = "0.7.0"
-Pigeons = "0.4.2"
 Plots = "1.0.0"
 StatsBase = "0.33.0,0.34.0"
 StatsModels = "0.7.0"

docs/make.jl

Lines changed: 0 additions & 1 deletion
@@ -17,7 +17,6 @@ makedocs(
     ),
     modules = [
         SequentialSamplingModels,
-        Base.get_extension(SequentialSamplingModels, :TuringExt),
         Base.get_extension(SequentialSamplingModels, :PlotsExt)
     ],
     pages = [

docs/src/DDM.md

Lines changed: 2 additions & 2 deletions
@@ -66,7 +66,7 @@ z = 0.50
 Now that values have been assigned to the parameters, we will pass them to `DDM` to generate the model object.
 
 ```@example DDM
-dist = DDM(ν, α, τ, z)
+dist = DDM(; ν, α, z, τ)
 ```
 
 ## Simulate Model
@@ -101,7 +101,7 @@ The code below overlays the PDF on reaction time histograms for each option.
 
 ```@example DDM
 histogram(dist)
-plot!(dist; t_range=range(.301, 1, length=100))
+plot!(dist; t_range=range(.501, 1, length=100))
 ```
 
 # References

docs/src/lba.md

Lines changed: 3 additions & 3 deletions
@@ -48,7 +48,7 @@ A = 0.80
 ```
 ### Threshold - Maximum Starting Point
 
-Evidence accumulates until accumulator reaches a threshold $\alpha = k +A$. The threshold is parameterized this way to faciliate parameter estimation and to ensure that $A \le \alpha$.
+Evidence accumulates until accumulator reaches a threshold $\alpha = k + A$. The threshold is parameterized this way to faciliate parameter estimation and to ensure that $A \le \alpha$.
 ```@example lba
 k = 0.50
 ```
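
The threshold parameterization described in the hunk above can be made concrete with a small worked sketch. The values are the ones used on this documentation page; the standalone computation itself is only illustrative and not part of the commit:

```julia
# Worked sketch of the threshold parameterization α = k + A.
A = 0.80      # maximum starting point of evidence
k = 0.50      # distance between A and the threshold
α = k + A     # threshold: 1.30, so A ≤ α holds by construction
```
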
@@ -88,14 +88,14 @@ logpdf.(dist, choices, rts)
 ## Compute Choice Probability
 The choice probability $\Pr(C=c)$ is computed by passing the model and choice index to `cdf` along with a large value for time as the second argument.
 ```@example lba
-cdf(dist, 1, Inf)
+cdf(dist, 1, 100)
 ```
 
 ## Plot Simulation
 The code below overlays the PDF on reaction time histograms for each option.
 ```@example lba
 histogram(dist)
-plot!(dist; t_range=range(.3,2.5, length=100), xlims=(0, 2.5))
+plot!(dist; t_range=range(.3, 2.5, length=100), xlims=(0, 2.5))
 
 ```
 # References

docs/src/lnr.md

Lines changed: 3 additions & 3 deletions
@@ -25,7 +25,7 @@ In the code below, we will define parameters for the LBA and create a model obje
 
 ### $\nu$
 
-We will set $\nu [-1,-1.5]$.
+We will set $\nu = [-1,-1.5]$.
 
 ```@example lnr
 ν = [-1,-1.5]
@@ -55,7 +55,7 @@ Non-decision time is an additive constant representing encoding and motor respon
 Now that values have been asigned to the parameters, we will pass them to `LNR` to generate the model object.
 
 ```@example lnr
-dist = LNR(ν, σ, τ)
+dist = LNR(; ν, σ, τ)
 ```
 ## Simulate Model
 
@@ -81,7 +81,7 @@ logpdf.(dist, choices, rts)
 The choice probability $\Pr(C=c)$ is computed by passing the model and choice index to `cdf`.
 
 ```@example lnr
-cdf(dist, 1, Inf)
+cdf(dist, 1, 100)
 ```
 To compute the joint probability of choosing $c$ within $t$ seconds, i.e., $\Pr(T \leq t \wedge C=c)$, pass a third argument for $t$.
 
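As the last context line of the hunk above notes, `cdf` returns the joint probability when the time cutoff is finite; the large third argument (here 100 s) only approximates $t \to \infty$ to recover the marginal choice probability. A minimal sketch of the finite-cutoff case, with an arbitrary 0.75 s cutoff that is not part of the commit:

```julia
# Hypothetical example: Pr(T ≤ 0.75 ∧ C = 1) for the LNR model object above.
cdf(dist, 1, 0.75)
```
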
docs/src/mlba.md

Lines changed: 1 addition & 1 deletion
@@ -202,7 +202,7 @@ In this case, the preferences have reversed: job `B` is now preferred over job `
 The choice probability $\Pr(C=c)$ is computed by passing the model and choice index to `cdf` along with a large value for time as the second argument.
 
 ```@example MLBA
-cdf(dist, 1, Inf, M₃)
+cdf(dist, 1, 100, M₃)
 ```
 
 ## Plot Simulation

docs/src/poisson_race.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ logpdf.(dist, choices, rts)
 ## Compute Choice Probability
 The choice probability $\Pr(C=c)$ is computed by passing the model and choice index to `cdf` along with a large value for time as the second argument.
 ```@example poisson_race
-cdf(dist, 1, Inf)
+cdf(dist, 1, 100)
 ```
 
 ## Plot Simulation

docs/src/predictive_distributions.md

Lines changed: 93 additions & 33 deletions
@@ -15,34 +15,65 @@ using Distributions
 using Plots
 using Random
 using SequentialSamplingModels
-using Turing
+using Turing
+using TuringUtilities
 Random.seed!(1124)
 
 n_samples = 50
-rts = rand(Wald(ν=1.5, α=.8, τ=.3), n_samples)
+rts = rand(Wald(ν = 1.5, α = 0.8, τ = 0.3), n_samples)
 
 @model function wald_model(rts)
     ν ~ truncated(Normal(1.5, 1), 0, Inf)
-    α ~ truncated(Normal(.8, 1), 0, Inf)
+    α ~ truncated(Normal(0.8, 1), 0, Inf)
     τ = 0.3
     rts ~ Wald(ν, α, τ)
-    return (;ν, α, τ)
+    return (; ν, α, τ)
 end
 
 model = wald_model(rts)
 
 prior_chain = sample(model, Prior(), 1000)
 
-pred_model = predict_distribution(Wald; model, func=mean, n_samples)
-prior_preds = generated_quantities(pred_model, prior_chain)
-
-post_chain = sample(model, NUTS(1000, .85), 1000)
-post_preds = generated_quantities(pred_model, post_chain)
-
-histogram(prior_preds[:], xlims=(0,4), xlabel="Mean RT", ylabel="Density", norm=true,
-    color=:grey, label="prior", grid=false)
-histogram!(post_preds[:], alpha=.7, color=:darkred, norm=true, label="posterior", grid=false)
-vline!([mean(rts)], linestyle=:dash, color=:black, linewidth=2, label="data")
+pred_model = predict_distribution(;
+    simulator = Θ -> rand(Wald(; Θ...), n_samples),
+    model,
+    func = mean
+)
+prior_preds = returned(pred_model, prior_chain)
+
+post_chain = sample(model, NUTS(1000, 0.85), 1000)
+post_preds = returned(pred_model, post_chain)
+
+histogram(
+    prior_preds[:],
+    xlims = (0, 4),
+    xlabel = "Mean RT",
+    ylabel = "Density",
+    norm = true,
+    color = :grey,
+    label = "prior",
+    grid = false
+)
+
+histogram!(
+    post_preds[:],
+    alpha = 0.7,
+    color = :darkred,
+    norm = true,
+    label = "posterior",
+    grid = false
+)
+
+vline!([mean(rts)], linestyle = :dash, color = :black, linewidth = 2, label = "data")
+
+pred_quantiles = predict_distribution(;
+    simulator = Θ -> rand(Wald(; Θ...), n_samples),
+    model,
+    func = compute_quantiles
+)
+post_quantile_preds = returned(pred_quantiles, post_chain)
+q_data = compute_quantiles(rts)
+plot_quantiles(q_data, post_quantile_preds)
 ```
 ```@raw html
 </details>
@@ -56,15 +87,16 @@ using Distributions
 using Plots
 using Random
 using SequentialSamplingModels
-using Turing
+using Turing
+using TuringUtilities
 Random.seed!(1124)
 ```
 
 ## Generate Simulated Data
 We will use the [Wald](wald.md) model as a simple example to illustrate how to create predictive distributions. The `Wald` model describes the evidence accumulation process underlying single detection decisions, such as respending when a stimulus appears. In the code block below, we will generate 50 data points.
 ```julia
 n_samples = 50
-rts = rand(Wald(ν=1.5, α=.8, τ=.3), n_samples)
+rts = rand(Wald(ν = 1.5, α = 0.8, τ = 0.3), n_samples)
 ```
 
 ## Define Turing Model
@@ -74,10 +106,10 @@ Next, we will develop a Turing model for generating prior and posterior predicti
 ```julia
 @model function wald_model(rts)
     ν ~ truncated(Normal(1.5, 1), 0, Inf)
-    α ~ truncated(Normal(.8, 1), 0, Inf)
+    α ~ truncated(Normal(0.8, 1), 0, Inf)
     τ = 0.3
     rts ~ Wald(ν, α, τ)
-    return (;ν, α, τ)
+    return (; ν, α, τ)
 end
 ```
 In the next code block, we will pass the data and create a model object.
@@ -93,46 +125,74 @@ prior_chain = sample(model, Prior(), 1000)
 For the next step, we will generate predictions from the model using the parameters sampled from the prior distribution. When `Turing` is loaded, `SequentialSamplingModels` automatically loads `predict_distribution` into your session. The signature for `predict_distribution` is as follows:
 
 ```julia
-predict_distribution(dist, args...; model, func, n_samples, kwargs...)
+predict_distribution(args...; simulator, model, func, kwargs...)
 ```
-`func` computes a statistic from simulated data of the model and has the general form `func(sim_data, args...; kwargs...)`. Thus, the only constraint is that `func` must recieve the simulated data as its first argument. `args...` and `kwargs...` are optionally pased to `func`. The remaining inputs are the model type `dist`, the Turing model object `model`, and the number of simulated observations `n_samples`.
+`
+The function `simulator` accepts a `NamedTuple` of sampled parameters and returns simulated data. `func` computes a statistic from simulated data of the model and has the general form `func(sim_data, args...; kwargs...)`. Thus, the only constraint is that `func` must recieve the simulated data as its first argument. `args...` and `kwargs...` are optionally pased to `func`. The keyword `model` the Turing model object.
 
-As a simple illustration, we will compute the prior predictive mean by calling the following two functions. The first function creates a new function to sample from the predictive distribution and the second function `generated_quantities` performs the sampling.
+As a simple illustration, we will compute the prior predictive mean by calling the following two functions. The first function creates a new function to sample from the predictive distribution and the second function `returned` performs the sampling.
 
 ```julia
-pred_model = predict_distribution(Wald; model, func=mean, n_samples)
-prior_preds = generated_quantities(pred_model, prior_chain)
+pred_model = predict_distribution(;
+    simulator = Θ -> rand(Wald(; Θ...), n_samples),
+    model,
+    func = mean
+)
+prior_preds = returned(pred_model, prior_chain)
 ```
 
 ## Generate Posterior Predictive Distribution
 
-Generating a posterior predictive distribution involves a similar process. First, we will estimate the parameters from the data to obtain a chain of posterior samples. Next, we will generate the posterior predictive distribution using `generated_quantities`:
+Generating a posterior predictive distribution involves a similar process. First, we will estimate the parameters from the data to obtain a chain of posterior samples. Next, we will generate the posterior predictive distribution using `returned`:
 
 ```julia
-post_chain = sample(model, NUTS(1000, .85), 1000)
-post_preds = generated_quantities(pred_model, post_chain)
+post_chain = sample(model, NUTS(1000, 0.85), 1000)
+post_preds = returned(pred_model, post_chain)
 ```
 
 ## Plot the Distributions
 
 Now that we have generated the predictive distributions, we can compare them to the data by plotting them as a histogram. The histogram below reveals two insights: first, the data are centered near the prior and posterior predictive distributions, indicating they predict the data accurately; second, the posterior distribution is concentrated more closely around the data, indicating the information gain acquired during parameter estimation.
 
 ```julia
-histogram(prior_preds[:], xlims=(0,4), xlabel="Mean RT", ylabel="Density", norm=true,
-    color=:grey, label="prior", grid=false)
-histogram!(post_preds[:], alpha=.7, color=:darkred, norm=true, label="posterior", grid=false)
-vline!([mean(rts)], linestyle=:dash, color=:black, linewidth=2, label="data")
+histogram(
+    prior_preds[:],
+    xlims = (0, 4),
+    xlabel = "Mean RT",
+    ylabel = "Density",
+    norm = true,
+    color = :grey,
+    label = "prior",
+    grid = false
+)
+
+histogram!(
+    post_preds[:],
+    alpha = 0.7,
+    color = :darkred,
+    norm = true,
+    label = "posterior",
+    grid = false
+)
+
+vline!([mean(rts)], linestyle = :dash, color = :black, linewidth = 2, label = "data")
+
 ```
 ![](assets/wald_predictive_means.png)
 ## Posterior Predictive Distribution of Quantiles
 One goal of SSMs is to accurately characterize the distribution of reaction times. The previous example only evaluated one aspective of the model---namely, the predicted mean. Given the interest in characterizing the shape of the RT distribution, we need a different method. One method for evaluating the model's ability to capture the shape of the distribution is to compare the quantiles. In the example below, the quantiles of the data and model are evaluated at the deciles: $[.1,.2,\dots, .9]$. If the model matches the data accurately, the quantiles will fall along the identity line.
 
-```julia
-pred_quantiles = predict_distribution(Wald; model, func=compute_quantiles, n_samples=20)
-post_quantile_preds = generated_quantities(pred_quantiles, post_chain)
+```julia
+pred_quantiles = predict_distribution(;
+    simulator = Θ -> rand(Wald(; Θ...), n_samples),
+    model,
+    func = compute_quantiles
+)
+post_quantile_preds = returned(pred_quantiles, post_chain)
 q_data = compute_quantiles(rts)
 plot_quantiles(q_data, post_quantile_preds)
 ```
+
 ![](assets/wald_predictive_qq_plots.png)
 
 The posterior predictive quantile-quantile plot above shows that the model fits the reaction time distribution well. This close match is to be expected, as we generated the data from the same model.
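
The updated `predict_distribution` shown in this diff accepts any summary statistic through `func`, as long as the statistic takes the simulated data as its first argument. A minimal sketch under that assumption, reusing the `simulator`, `model`, and `post_chain` defined above; the choice of `std` is illustrative and not part of the commit:

```julia
# Posterior predictive distribution of the RT standard deviation (illustrative).
using Statistics: std

pred_sd = predict_distribution(;
    simulator = Θ -> rand(Wald(; Θ...), n_samples),
    model,
    func = std
)
post_sd_preds = returned(pred_sd, post_chain)
```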

docs/src/shifted_lognormal.md

Lines changed: 2 additions & 2 deletions
@@ -56,7 +56,7 @@ Non-decision time is an additive constant representing encoding and motor respon
 Now that values have been asigned to the parameters, we will pass them to `shifted lognormal` to generate the model object.
 
 ```@example shifted_lognormal
-dist = ShiftedLogNormal(ν, σ, τ)
+dist = ShiftedLogNormal(; ν, σ, τ)
 ```
 ## Simulate Model
 
@@ -91,7 +91,7 @@ cdf(dist, .75)
 The code below overlays the PDF on reaction time histogram.
 ```@example shifted_lognormal
 histogram(dist)
-plot!(dist; t_range=range(.130, 1.5, length=100))
+plot!(dist; t_range=range(.30, 1.5, length=100))
 ```
 # References
 