`docs/src/neuralestimators_amorized.md` (69 additions & 29 deletions)
@@ -1,6 +1,6 @@
# Neural Parameter Estimation

- Neural parameter estimation provides a likelihood-free approach to parameter recovery, especially useful for models with computationally intractable likelihoods. This method is based on training neural networks to learn the mapping from data to parameters. See the review paper by Zammit-Mangion et al. (2025) for more details. Once trained, these networks can perform inference rapidly across multiple datasets, making them particularly valuable for models like the Leaky Competing Accumulator (LCA; Usher & McClelland, 2001).
+ Neural parameter estimation uses neural networks to learn the mapping between simulated data and model parameters (for a detailed review, see Zammit-Mangion et al., 2024). It is particularly useful for models with computationally intractable likelihoods, such as the Leaky Competing Accumulator (LCA; Usher & McClelland, 2001). Once trained, a network can be saved and reused to perform inference rapidly across multiple datasets, or applied to computationally intensive parameter recovery simulations that gauge the quality of parameter estimates under ideal conditions.
Below, we demonstrate how to estimate parameters of the LCA model using the [NeuralEstimators.jl](https://github.com/msainsburydale/NeuralEstimators) package.
@@ -24,7 +24,12 @@ Random.seed!(123)
## Define Parameter Sampling

- Unlike traditional Bayesian inference methods, simulation-based inference approaches require us to define a prior sampling function specifically to generate synthetic training data. While traditional methods like MCMC also sample from the prior, those samples are used directly during inference rather than to create a separate training dataset. In SBI, we use the prior to sample a wide range of parameters and simulate corresponding data, which we then use to train a model (e.g., a neural network) to approximate the posterior. We will use the following function to sample a range of parameters for training:
+ Unlike traditional Bayesian inference methods, neural parameter estimation requires us to define two functions so that the neural network can learn the mapping between simulated data and parameters: one samples parameters from a prior distribution, and the other generates simulated data conditional on a sampled parameter vector. While traditional methods such as MCMC also sample from the prior, those samples are used directly during inference rather than to create a separate training dataset.
+
+ *Schematic of neural parameter estimation. Once trained, the neural network provides a direct mapping from observed data (Z) to parameter estimates (θ̂), enabling rapid inference without the computational burden of traditional methods.*
+
+ In neural parameter estimation, we use the prior to sample a wide range of parameters and simulate corresponding data, which we then use to train a model (e.g., a neural network) to approximate a point estimate or the posterior. We use the following function to sample a range of parameters for training:
```julia
# Function to sample parameters from priors
# ...
```
@@ -63,7 +68,8 @@ function simulate(θ, n_trials_per_param)
    choices, rts = rand(model, n_trials_per_param)

    # Return a transposed matrix where each column is a trial
-   return [choices rts]'
+   return Float32.([choices rts]')
end

return simulated_data
@@ -77,11 +83,11 @@ For LCA parameter recovery, we use a DeepSet architecture which respects the per
```julia
# Create neural network architecture for parameter recovery
function create_neural_estimator(;
-   ν_bounds = (0.1, 4.0),
-   α_bounds = (0.5, 3.5),
-   β_bounds = (0.0, 0.5),
-   λ_bounds = (0.0, 0.5),
-   τ_bounds = (0.1, 0.5)
+   ν_bounds = (0.1, 6.0),
+   α_bounds = (0.3, 4.5),
+   β_bounds = (0.0, 0.8),
+   λ_bounds = (0.0, 0.8),
+   τ_bounds = (0.1, 2.0)
)
    # Unpack defined parameter bounds
    ν_min, ν_max = ν_bounds  # Drift rates
@@ -132,6 +138,8 @@ function create_neural_estimator(;
end
```

+ The constructed neural network is a point estimator that corresponds to a Bayes estimator, i.e., a functional of the posterior distribution. Under the loss function specified above, this point estimate corresponds to the posterior mean. For the theoretical foundations of neural Bayes estimators, see Sainsbury-Dale et al. (2024).
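As a hedged illustration of how such a point estimator is typically assembled with NeuralEstimators.jl: the sketch below pairs an inner (per-trial) network `ψ` with an outer (post-pooling) network `ϕ` in a `DeepSet`. The layer widths, depths, and activations here are assumptions for illustration, not the tutorial's exact architecture.

```julia
# Illustrative sketch only: widths and activations are assumed, not the tutorial's values
using NeuralEstimators, Flux

d = 2   # dimension of one data point: (choice, rt)
p = 5   # number of parameters: ν, α, β, λ, τ
w = 64  # hidden-layer width (assumed)

# DeepSet: ψ summarizes each trial; ϕ maps the pooled summary to parameter estimates
ψ = Chain(Dense(d, w, relu), Dense(w, w, relu))
ϕ = Chain(Dense(w, w, relu), Dense(w, p))
network = DeepSet(ψ, ϕ)

θ̂ = PointEstimator(network)
```

Because the pooling step in the DeepSet is permutation invariant, the estimator treats trials as exchangeable, which matches the i.i.d.-trials structure of the simulated LCA data.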
## Training the Neural Estimator
Neural estimators, like all deep learning methods, require a training phase during which they learn the mapping from data to parameters. Here, we train the estimator by simulating data on the fly: the sampler provides new parameter vectors from the prior, and the simulator generates corresponding data conditional on those parameters. Because the network never sees the same simulated dataset twice under online training, overfitting is unlikely. For details on the available training arguments, see the API documentation [here](https://msainsburydale.github.io/NeuralEstimators.jl/dev/API/core/#Training).
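A minimal sketch of such an on-the-fly training call, assuming a prior sampler `sample`, a simulator `simulate`, and an estimator `θ̂` as defined in this tutorial. The numeric settings are illustrative assumptions, not the tutorial's actual values.

```julia
# Illustrative settings; see the NeuralEstimators.jl training API for all arguments
θ̂_trained = train(
    θ̂,        # the neural point estimator
    sample,   # draws parameter vectors from the prior
    simulate; # simulates data conditional on each parameter vector
    m = 100,  # trials per simulated dataset (assumed)
    epochs = 50
)
```

Passing the sampler and simulator as functions (rather than fixed datasets) is what triggers online training, so each epoch sees freshly simulated data.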
@@ -163,9 +171,9 @@ We can assess the performance of our trained estimator on held-out test data:
Neural estimators are particularly effective for models with computationally intractable likelihoods like the LCA model. However, certain parameters (particularly β and λ) can be difficult to recover, even with advanced neural network architectures. This is a property of the LCA model rather than a limitation of the estimation technique.
Additional details can be found in the [NeuralEstimators.jl documentation](https://github.com/msainsburydale/NeuralEstimators).
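Once trained, applying the estimator amounts to a single forward pass. A hedged usage sketch, assuming `θ̂_trained` is the trained estimator and that data are formatted as in `simulate` (a 2 × n Float32 matrix of choices and response times); the helper names are from this tutorial, and the call pattern follows the NeuralEstimators.jl convention of passing a vector of datasets.

```julia
# Hypothetical application to new data; estimation is fast because it is one forward pass
Z = simulate(sample(1), 100)  # stand-in for an observed dataset
θ̂_point = θ̂_trained([Z])      # point estimates, one column per dataset
```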
@@ -296,7 +320,7 @@ function simulate(θ, n_trials_per_param)
    choices, rts = rand(model, n_trials_per_param)

    # Return a transposed matrix where each column is a trial
-   return [choices rts]'
+   return Float32.([choices rts]')
end
@@ -305,11 +329,11 @@ end
# Create neural network architecture for parameter recovery