docs/src/lba.md (3 additions, 3 deletions)
@@ -48,7 +48,7 @@ A = 0.80
```
### Threshold - Maximum Starting Point

-Evidence accumulates until accumulator reaches a threshold $\alpha = k +A$. The threshold is parameterized this way to faciliate parameter estimation and to ensure that $A \le \alpha$.
+Evidence accumulates until the accumulator reaches a threshold $\alpha = k + A$. The threshold is parameterized this way to facilitate parameter estimation and to ensure that $A \le \alpha$.
```@example lba
k = 0.50
```
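In code, the parameterization above amounts to the following (a trivial but concrete sketch in plain Julia, using the values `A = 0.80` and `k = 0.50` from the surrounding blocks):

```julia
A = 0.80   # maximum starting point of evidence
k = 0.50   # gap between the maximum starting point and the threshold
α = k + A  # threshold; this construction guarantees A ≤ α
```

Because `k` is constrained to be non-negative during estimation, `α` can never fall below `A`.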
@@ -88,14 +88,14 @@ logpdf.(dist, choices, rts)
## Compute Choice Probability
The choice probability $\Pr(C=c)$ is computed by passing the model and choice index to `cdf`, along with a large value for time as the second argument.
```@example lba
-cdf(dist, 1, Inf)
+cdf(dist, 1, 100)
```

## Plot Simulation
The code below overlays the PDF on reaction time histograms for each option.
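A rough, hypothetical sketch of such an overlay (not the page's actual code), assuming `Plots` is installed and that the `dist`, `choices`, and `rts` objects from the earlier blocks are in scope; the broadcast call `pdf.(dist, c, t)` is assumed to return the joint density of choice `c` and time `t`:

```julia
using Plots

# histograms of simulated RTs, split by choice (normalized to densities)
histogram(rts[choices .== 1], norm = true, alpha = 0.5, label = "choice 1")
histogram!(rts[choices .== 2], norm = true, alpha = 0.5, label = "choice 2")

# overlay the model-implied joint density of (choice, rt) for each option
t = range(0.31, 2.0; length = 200)
plot!(t, pdf.(dist, 1, t); color = :black, label = "pdf")
plot!(t, pdf.(dist, 2, t); color = :black, label = "")
```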
We will use the [Wald](wald.md) model as a simple example to illustrate how to create predictive distributions. The `Wald` model describes the evidence accumulation process underlying single detection decisions, such as responding when a stimulus appears. In the code block below, we will generate 50 data points.
```julia
n_samples = 50
-rts = rand(Wald(ν=1.5, α=.8, τ=.3), n_samples)
+rts = rand(Wald(ν=1.5, α=0.8, τ=0.3), n_samples)
```
## Define Turing Model
@@ -74,10 +106,10 @@ Next, we will develop a Turing model for generating prior and posterior predictive distributions.
```julia
@model function wald_model(rts)
    ν ~ truncated(Normal(1.5, 1), 0, Inf)
-    α ~ truncated(Normal(.8, 1), 0, Inf)
+    α ~ truncated(Normal(0.8, 1), 0, Inf)
    τ = 0.3
    rts ~ Wald(ν, α, τ)
-    return (;ν, α, τ)
+    return (; ν, α, τ)
end
```
In the next code block, we will pass the data and create a model object.
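A minimal sketch of this step, assuming the `wald_model` and `rts` defined above (standard Turing usage: calling the model function on the data conditions the model):

```julia
model = wald_model(rts)
```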
For the next step, we will generate predictions from the model using the parameters sampled from the prior distribution. When `Turing` is loaded, `SequentialSamplingModels` automatically loads `predict_distribution` into your session. The signature for `predict_distribution` is as follows:
-`func` computes a statistic from simulated data of the model and has the general form `func(sim_data, args...; kwargs...)`. Thus, the only constraint is that `func` must receive the simulated data as its first argument. `args...` and `kwargs...` are optionally passed to `func`. The remaining inputs are the model type `dist`, the Turing model object `model`, and the number of simulated observations `n_samples`.
+The function `simulator` accepts a `NamedTuple` of sampled parameters and returns simulated data. `func` computes a statistic from simulated data of the model and has the general form `func(sim_data, args...; kwargs...)`. Thus, the only constraint is that `func` must receive the simulated data as its first argument. `args...` and `kwargs...` are optionally passed to `func`. The keyword `model` is the Turing model object.
-As a simple illustration, we will compute the prior predictive mean by calling the following two functions. The first function creates a new function to sample from the predictive distribution and the second function `generated_quantities` performs the sampling.
+As a simple illustration, we will compute the prior predictive mean by calling the following two functions. The first function creates a new function to sample from the predictive distribution and the second function `returned` performs the sampling.

-Generating a posterior predictive distribution involves a similar process. First, we will estimate the parameters from the data to obtain a chain of posterior samples. Next, we will generate the posterior predictive distribution using `generated_quantities`:
+Generating a posterior predictive distribution involves a similar process. First, we will estimate the parameters from the data to obtain a chain of posterior samples. Next, we will generate the posterior predictive distribution using `returned`:
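Hedged sketches of both steps, assuming Turing's `sample` with the `Prior()` and `NUTS()` samplers and the `returned` function (which supersedes `generated_quantities` in recent Turing releases); the details may differ from the page's code:

```julia
using Turing

# prior predictive: draw parameters from the prior, then collect the
# model's returned values (ν, α, τ) for each draw
prior_chain = sample(model, Prior(), 1_000)
prior_params = returned(model, prior_chain)

# posterior predictive: first estimate the parameters from the data
post_chain = sample(model, NUTS(), 1_000)
post_params = returned(model, post_chain)
```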
Now that we have generated the predictive distributions, we can compare them to the data by plotting them as a histogram. The histogram below reveals two insights: first, the data are centered near the prior and posterior predictive distributions, indicating that they predict the data accurately; second, the posterior distribution is concentrated more closely around the data, indicating the information gain acquired during parameter estimation.
vline!([mean(rts)], linestyle = :dash, color = :black, linewidth = 2, label = "data")
```
## Posterior Predictive Distribution of Quantiles

One goal of SSMs is to accurately characterize the distribution of reaction times. The previous example only evaluated one aspect of the model---namely, the predicted mean. Given the interest in characterizing the shape of the RT distribution, we need a different method. One method for evaluating the model's ability to capture the shape of the distribution is to compare the quantiles. In the example below, the quantiles of the data and model are evaluated at the deciles: $[.1, .2, \dots, .9]$. If the model matches the data accurately, the quantiles will fall along the identity line.

The posterior predictive quantile-quantile plot above shows that the model fits the reaction time distribution well. This close match is to be expected, as we generated the data from the same model.
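The quantile comparison itself needs only `quantile` from the `Statistics` standard library; here is a self-contained sketch with hypothetical RT vectors (`rts_data` and `rts_pred` stand in for the observed and posterior predictive samples):

```julia
using Statistics

deciles = 0.1:0.1:0.9
# hypothetical observed RTs (illustrative values only)
rts_data = [0.42, 0.48, 0.55, 0.61, 0.70, 0.82, 0.95, 1.15, 1.42, 1.90]
rts_pred = rts_data  # a perfectly calibrated model reproduces the data

q_data = quantile(rts_data, deciles)
q_pred = quantile(rts_pred, deciles)
# when the model matches the data, the pairs (q_data[i], q_pred[i])
# fall on the identity line of the quantile-quantile plot
```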