### [Policy learning and dynamic treatment regimes]{#policy}

Many packages implement estimation of an optimal dynamic treatment regime (DTR). *Direct methods* cast the problem as a classification task: outcome-weighted learning (`r pkg("DTRlearn2")`); efficient augmentation and relaxation learning (EARL), residual weighted learning, and value-search methods based on Augmented Inverse Probability Weighted Estimators (AIPWE) and Inverse Probability Weighted Estimators (IPWE) (`r pkg("DynTxRegime")`).
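
Conceptually, direct methods turn regime estimation into weighted classification. The following is a minimal, self-contained sketch of the outcome-weighted learning idea in base R, using a weighted logistic surrogate in place of the hinge loss used by the packages above; all names are illustrative, and this is not the API of any listed package.

```r
# Outcome-weighted learning (OWL) idea, single stage, randomized treatment:
# find a rule d(X) that agrees with the observed treatment A when the
# observed outcome Y is large, via weighted classification.
set.seed(1)
n <- 500
X <- matrix(rnorm(n * 2), n, 2)
A <- rbinom(n, 1, 0.5)                  # known propensity pi = 0.5
Y <- 1 + X[, 1] * (A - 0.5) + rnorm(n)  # larger Y is better
w <- (Y - min(Y) + 0.01) / 0.5          # non-negative IPW weights
# Weighted logistic surrogate for the hinge loss used in OWL
fit <- glm(A ~ X, weights = w, family = quasibinomial())
d_hat <- as.numeric(predict(fit, type = "response") > 0.5)  # estimated rule
```
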

*Indirect methods* first estimate nuisance models and then derive the regime: Q-learning (`r pkg("DTRlearn2")`, `r pkg("DynTxRegime")`, `r pkg("DTRreg")`), interactive Q-learning (`r pkg("DynTxRegime")`), and doubly robust Q-learning.
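
As a reference point, one-stage Q-learning amounts to fitting an outcome regression with treatment interactions and recommending the treatment that maximizes the predicted outcome. The base-R sketch below uses simulated data and illustrative names; the packages above generalize this to multiple stages and provide proper inference.

```r
# One-stage Q-learning: model Q(X, A) = E[Y | X, A], then pick the
# treatment maximizing the predicted outcome for each subject.
set.seed(1)
df <- data.frame(X1 = rnorm(300), X2 = rnorm(300))
df$A <- rbinom(300, 1, 0.5)
df$Y <- with(df, X1 + X1 * A - 0.5 * A + rnorm(300))
q_fit <- lm(Y ~ (X1 + X2) * A, data = df)
q1 <- predict(q_fit, transform(df, A = 1))  # predicted outcome under A = 1
q0 <- predict(q_fit, transform(df, A = 0))  # predicted outcome under A = 0
d_opt <- as.numeric(q1 > q0)                # estimated optimal rule
```
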

`r pkg("polle")` provides a unified framework for learning and evaluating finite-stage policies from observational data, with methods such as doubly robust restricted Q-learning, policy tree learning, and outcome-weighted learning. Flexible machine learning methods can be used to estimate the nuisance components, and valid inference for the policy value is ensured via cross-fitting. The package wraps and extends functionality from `r pkg("DynTxRegime")`, `r pkg("policytree")`, `r pkg("grf")`, and `r pkg("DTRlearn2")`.
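
A typical `r pkg("polle")` analysis, as we read the package documentation, chains `policy_data()`, `policy_learn()`, and `policy_eval()`; the argument names and learner types below reflect that reading and should be checked against the current manual.

```r
# Sketch of a single-stage polle analysis (argument names per our reading
# of the documentation; verify against the current manual).
library(polle)
d <- data.frame(Z = rnorm(200),
                A = rbinom(200, 1, 0.5),
                U = rnorm(200))                   # U is the utility/outcome
pd <- policy_data(d, action = "A", covariates = "Z", utility = "U")
pl <- policy_learn(type = "ql")                   # Q-learning based policy
policy_eval(policy_data = pd, policy_learn = pl)  # doubly robust value estimate
```
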

`r pkg("smartsizer")` provides a set of tools for determining the sample size necessary to identify the optimal DTR; `r pkg("DTRlearn2")` also implements estimators for general K-stage DTRs from sequential multiple assignment randomized trials (SMARTs).

*With variable selection*:

`r pkg("DTRlearn2")` offers outcome-weighted learning with variable selection via penalization. `r pkg("personalized")` implements estimation of individualized treatment rules from observational and randomized data, with options for variable selection, gradient-boosting-based estimation, and outcome model augmentation, for continuous, binary, count, and time-to-event outcomes. `r pkg("OTRselect")` implements a penalized regression method that simultaneously estimates the optimal treatment strategy and identifies important variables for either censored or uncensored continuous responses.
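
For illustration, a `r pkg("personalized")` call might look like the following, based on the package's documented `fit.subgroup()` interface (the lasso loss performs the variable selection); the data and the propensity function are simulated and illustrative.

```r
# Sketch of an individualized treatment rule with lasso-based variable
# selection via personalized::fit.subgroup(); check the manual for details.
library(personalized)
set.seed(1)
n <- 300; p <- 10
x <- matrix(rnorm(n * p), n, p)
trt <- rbinom(n, 1, 0.5)
y <- x[, 1] * (trt - 0.5) + rnorm(n)
prop.func <- function(x, trt) rep(0.5, NROW(x))  # randomized trial propensity
fit <- fit.subgroup(x = x, y = y, trt = trt,
                    propensity.func = prop.func,
                    loss = "sq_loss_lasso")
summary(fit)
```
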
*Other methods using quantiles*:

`r pkg("quantoptr")` implements methods based on the marginal quantile, marginal mean, and mean absolute difference, as well as doubly robust methods for quantile-optimal treatment regimes; `r pkg("QTOCen")` provides methods for estimating mean- and quantile-optimal treatment regimes from censored data.
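
To fix ideas, a quantile-optimal regime maximizes a marginal quantile of the outcome rather than its mean. Below is a toy base-R sketch that evaluates linear rules by an inverse-probability-weighted quantile and searches a grid of cutoffs; `r pkg("quantoptr")` and `r pkg("QTOCen")` implement principled versions of this idea, and all names here are illustrative.

```r
# Illustrative IPW estimate of the marginal median under linear rules
# d(x) = 1{x >= cut}, maximized over a grid of cutoffs.
set.seed(2)
n <- 400
x <- rnorm(n)
a <- rbinom(n, 1, 0.5)            # known propensity pi = 0.5
y <- x * a + rnorm(n)
ipw_quantile <- function(cut, tau = 0.5) {
  d <- as.numeric(x >= cut)       # candidate rule
  w <- ifelse(a == d, 1 / 0.5, 0) # weight subjects who followed the rule
  ord <- order(y)
  cw <- cumsum(w[ord]) / sum(w)   # weighted CDF of y
  y[ord][which(cw >= tau)[1]]     # weighted tau-quantile
}
grid <- seq(-2, 2, by = 0.1)
best_cut <- grid[which.max(sapply(grid, ipw_quantile))]
```
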

*Other approaches*:

`r pkg("policytree")` learns optimal policies via doubly robust empirical welfare maximization over decision trees. `r pkg("lmtp")` provides doubly robust causal effect estimates for modified treatment policies, dynamic treatment regimes, and static interventions. `r pkg("DTRreg")` offers G-estimation, dynamic weighted OLS, and Q-learning, together with several variance estimation approaches, and can handle survival outcomes and continuous treatment variables. `r pkg("DTRKernSmooth")` uses kernel smoothing to estimate optimal linear regimes. `r pkg("simml")` and `r pkg("simsl")` fit Single-Index Models with Multiple Links for experimental and observational data, respectively.
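
The `r pkg("grf")` plus `r pkg("policytree")` combination follows the pattern documented by those packages: fit a causal forest, extract doubly robust scores, and fit a shallow policy tree. The sketch below uses simulated data.

```r
# Policy learning via doubly robust empirical welfare maximization:
# causal forest -> doubly robust scores -> depth-2 policy tree.
library(grf)
library(policytree)
set.seed(3)
n <- 1000
X <- matrix(rnorm(n * 4), n, 4)
W <- rbinom(n, 1, 0.5)
Y <- pmax(X[, 1], 0) * W + X[, 2] + rnorm(n)
cf <- causal_forest(X, Y, W)
Gamma <- double_robust_scores(cf)   # n x 2 matrix of scores per action
tree <- policy_tree(X, Gamma, depth = 2)
head(predict(tree, X))              # recommended action (1 or 2)
```
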