|
8371 | 8371 | "We don't have the other two steps here because we used `nx=n1`, but if we did, their losses would also be included in the plot.\n",
8372 | 8372 | "- `model2`: step 3, the RNN that optimizes prediction of any residual neural activity.\n",
8373 | 8373 | "- `model2_Cz`: step 4, the mapping from the states of the second RNN to the behavior.\n", |
8374 | | - "See the **Supplementary Fig. 1** and **Methods**, in the DPAD paper ([Sani et al, 2024](https://doi.org/10.1038/s41593-024-01731-2)) for details of all steps." |
| 8374 | + "See the **Supplementary Fig. 1** and **Methods** in the DPAD paper ([Sani et al, 2024](https://doi.org/10.1038/s41593-024-01731-2)) for details of all optimization steps." |
8375 | 8375 | ] |
8376 | 8376 | }, |
8377 | 8377 | { |
8378 | 8378 | "cell_type": "markdown", |
8379 | 8379 | "metadata": {}, |
8380 | 8380 | "source": [ |
8381 | | - "### Alternative goal: best nonlinearity for neural self-prediction\n", |
8382 | | - "Let's now run flexible DPAD for best nonlinearity for neural self-prediction: \n", |
8383 | | - "If alternatively interested in optimized neural self-prediction, you can add **'y'** to 'GSUT' in the method code as 'GSUTy' and fit the flexible DPAD with this updated method code as follows:" |
| 8381 | + "### Alternative goal: using neural self-prediction as the metric to select the nonlinearity setting\n",
| 8382 | + "Let's now run flexible DPAD, but use neural self-prediction as the metric to select the nonlinearity setting. To do this, we add **'y'** to 'GSUT' in the method code as 'GSUTy' and fit the flexible DPAD with this updated method code as follows. Note that this only changes how the nonlinearity setting is selected; the fitting of the final DPAD model with the selected nonlinearity will still entirely optimize behavior decoding since we have `nx=n1`."
8384 | 8383 | ] |
8385 | 8384 | }, |
8386 | 8385 | { |
|
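The 'GSUT'→'GSUTy' change above can be illustrated with a minimal sketch; the 'GSUT' code and the 'y' suffix come from the text, but the string handling here is purely illustrative and is not the DPAD library's API:

```python
# Sketch of the method-code change described above. 'GSUT' (flexible DPAD,
# nonlinearity selected by behavior decoding) and the appended 'y'
# (select nonlinearity by neural self-prediction) come from the text;
# this string manipulation stands in for passing the method code to the
# DPAD fitting call and does not reflect the library's actual API.
behavior_code = "GSUT"                # nonlinearity selected by behavior decoding
self_pred_code = behavior_code + "y"  # nonlinearity selected by neural self-prediction
assert self_pred_code == "GSUTy"
```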
15521 | 15520 | "source": [ |
15522 | 15521 | "As the last line of the log shows, in this example the optimal nonlinearity yielding the best neural (y) self-prediction is: `DPAD_RTR2_ACzCy1HL64U_ErSV16` : $A'$, $C_z$ and $C_y$ nonlinear.\n",
15523 | 15522 | "\n", |
15524 | | - "We can next redo step 1.2 with this alternative nonlinearity setting to fit one final model for which nonlinearity setting has been selected to yield the best neural self-prediction. Note that even in this case, with this nonlinearity setting, the process of fitting the final model still involves the same optimization steps as before, with the first optimization step optimizing behavior prediction. See the **Supplementary Fig. 1** and **Methods**, in the DPAD paper ([Sani et al, 2024](https://doi.org/10.1038/s41593-024-01731-2)) for details of all steps. " |
| 15523 | + "We can next redo step 1.2 with this alternative nonlinearity setting to fit one final model for which the nonlinearity setting has been selected to yield the best neural self-prediction. Note that even in this case, with this nonlinearity setting, the process of fitting the final model still involves the same optimization steps as before, with the first optimization step optimizing behavior prediction. See the **Supplementary Fig. 1** and **Methods** in the DPAD paper ([Sani et al, 2024](https://doi.org/10.1038/s41593-024-01731-2)) for details of all optimization steps. "
15525 | 15524 | ] |
15526 | 15525 | }, |
15527 | 15526 | { |
|
15929 | 15928 | }, |
15930 | 15929 | { |
15931 | 15930 | "cell_type": "code", |
15932 | | - "execution_count": 21, |
| 15931 | + "execution_count": null, |
15933 | 15932 | "metadata": {}, |
15934 | 15933 | "outputs": [ |
15935 | 15934 | { |
|
15970 | 15969 | "cell_type": "markdown", |
15971 | 15970 | "metadata": {}, |
15972 | 15971 | "source": [ |
15973 | | - "We can see that both when the goal is behavior decoding (top), and when the goal is neural self-prediciton (bottom), nonlinear DPAD is outperforming linear DPAD. " |
| 15972 | + "We can see that both when the goal is behavior decoding (top) and when the goal is neural self-prediction (bottom), nonlinear DPAD outperforms linear DPAD. \n",
| 15973 | + "\n", |
| 15974 | + "**Note**: As noted earlier, \"Flexible DPAD (best neural self-prediction)\" still optimizes behavior decoding in its first optimization step; we call it \"best neural self-prediction\" only because the metric used to select the nonlinearity setting is neural self-prediction. If one is interested in only optimizing neural self-prediction, at the expense of behavior decoding, then a special case of DPAD, which we call NDM, can be used, where we set `n1=0` to skip the first and second optimization steps and learn all latent states using the 3rd and 4th optimization steps. See the **Supplementary Fig. 1** and **Methods** in the DPAD paper ([Sani et al, 2024](https://doi.org/10.1038/s41593-024-01731-2)) for details of all optimization steps. NDM loses much of the capabilities of DPAD because it no longer optimizes behavior decoding, so in the interest of avoiding confusion, we do not include a full demo of NDM here. We have, however, included some examples with NDM in simulation in our other notebook on DPAD ([DPAD_tutorial.ipynb](https://github.dev/ShanechiLab/DPAD/blob/main/source/DPAD/example/DPAD_tutorial.ipynb))."
15974 | 15975 | ] |
15975 | 15976 | } |
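The `n1`/`nx` relationship described in the note above can be sketched as follows. The facts (with `nx=n1` all states are learned in the behavior-optimizing steps 1-2; with `n1=0`, the NDM special case, all states are learned in the neural-self-prediction steps 3-4) come from the text; the helper function itself is hypothetical, not part of the DPAD library:

```python
# Hedged sketch (not the DPAD API) of how the latent state dimension nx is
# split across the four optimization steps described above: n1 states are
# learned in the behavior-optimizing steps 1-2, and the remaining nx - n1
# states in the neural-self-prediction steps 3-4.
def states_per_stage(nx, n1):
    """Return (states learned in steps 1-2, states learned in steps 3-4)."""
    assert 0 <= n1 <= nx
    return n1, nx - n1

print(states_per_stage(16, 16))  # nx=n1 (as in this notebook): (16, 0)
print(states_per_stage(16, 0))   # n1=0, the NDM special case: (0, 16)
```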
15976 | 15977 | ], |
|