
Commit 2ce8928

docs pointing to surrogate names
1 parent 41506b3 commit 2ce8928

6 files changed: 7 additions & 4 deletions


Binary file changed (2 Bytes): not shown.
Binary file changed (701 Bytes): not shown.

docs/build/html/_sources/evaluation.rst.txt

Lines changed: 2 additions & 1 deletion
@@ -15,7 +15,8 @@ The evaluation metrics used in geneRNIB are summarized below.
 ----
 
 
-For a detailed description of each metric, refer to the geneRNIB paper. Not all the metrics were applicable to all datasets, as shown in the table. In addition, only those datasets with * passed the applicability criteria for a given metric, which includes minimal variability and performance threshold set for each metric.
+For a detailed description of each metric, refer to the geneRNIB paper. To map the naming conventions used in the code and this table, refer to `surrogate_names` in `config.py` file in the `src/utils/` directory.
+Not all the metrics were applicable to all datasets, as shown in the table. In addition, only those datasets with * passed the applicability criteria for a given metric, which includes minimal variability and performance threshold set for each metric.
 In addition, not all metrics passed the additional criteria for inclusion in the final score calculation, as explained in the paper, and marked with ** in the table. This includes context specificity and robustness in stability analysis.
 
 The evaluation metrics expect the inferred network to be in the form of an AnnData object with specific format as explained here.
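
For context on the pointer added above: `surrogate_names` in `src/utils/config.py` maps the metric names used in the code to the names shown in the documentation table. The sketch below is illustrative only; the keys, values, and helper function are placeholders, not the actual contents of `config.py`.

    # Hypothetical sketch of the kind of mapping `surrogate_names` provides in
    # src/utils/config.py; the entries below are placeholders, not the real ones.
    surrogate_names = {
        "reg1": "Regression 1",        # internal code name -> name used in the docs table
        "ws_distance": "WS distance",  # placeholder example
    }

    def to_table_name(code_name: str) -> str:
        """Return the documentation-table name for an internal metric name."""
        return surrogate_names.get(code_name, code_name)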

docs/build/html/evaluation.html

Lines changed: 2 additions & 1 deletion
@@ -87,7 +87,8 @@ <h1>GRN evaluation<a class="headerlink" href="#grn-evaluation" title="Link to th
 <a class="reference internal image-reference" href="_images/metric_quality_evaluation.png"><img alt="_images/metric_quality_evaluation.png" class="align-center" src="_images/metric_quality_evaluation.png" style="width: 100%;" />
 </a>
 <hr class="docutils" />
-<p>For a detailed description of each metric, refer to the geneRNIB paper. Not all the metrics were applicable to all datasets, as shown in the table. In addition, only those datasets with * passed the applicability criteria for a given metric, which includes minimal variability and performance threshold set for each metric.
+<p>For a detailed description of each metric, refer to the geneRNIB paper. To map the naming conventions used in the code and this table, refer to <cite>surrogate_names</cite> in <cite>config.py</cite> file in the <cite>src/utils/</cite> directory.
+Not all the metrics were applicable to all datasets, as shown in the table. In addition, only those datasets with * passed the applicability criteria for a given metric, which includes minimal variability and performance threshold set for each metric.
 In addition, not all metrics passed the additional criteria for inclusion in the final score calculation, as explained in the paper, and marked with ** in the table. This includes context specificity and robustness in stability analysis.</p>
 <p>The evaluation metrics expect the inferred network to be in the form of an AnnData object with specific format as explained here.
 It should be noted that the metric currently evaluate only the <strong>top TF-gene pairs</strong>, currently limited to <strong>50,000 edges</strong>, ranked by their assigned weight.</p>
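
For context on the AnnData format referenced in this page: the sketch below is a hypothetical illustration of packaging an inferred GRN as a ranked TF-gene edge list capped at 50,000 edges. The column names ("source", "target", "weight") and the use of the `.uns` slot are assumptions for illustration, not the exact schema geneRNIB requires; see the format description linked from the evaluation docs.

    # Hypothetical illustration only: one way to hold a ranked TF-gene edge list
    # in an AnnData object. Column names and the .uns slot are assumptions; the
    # exact format geneRNIB expects is described in its own documentation.
    import anndata as ad
    import numpy as np
    import pandas as pd

    edges = pd.DataFrame({
        "source": ["TF1", "TF2"],      # regulator (transcription factor)
        "target": ["geneA", "geneB"],  # regulated gene
        "weight": [0.9, 0.4],          # confidence score used to rank edges
    })

    # Evaluation considers only the top-ranked pairs, currently capped at 50,000 edges.
    edges = edges.sort_values("weight", ascending=False).head(50_000)

    # Store the edge list alongside an otherwise empty AnnData object.
    net = ad.AnnData(X=np.zeros((0, 0)), uns={"prediction": edges})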

docs/build/html/searchindex.js

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

docs/source/evaluation.rst

Lines changed: 2 additions & 1 deletion
@@ -15,7 +15,8 @@ The evaluation metrics used in geneRNIB are summarized below.
 ----
 
 
-For a detailed description of each metric, refer to the geneRNIB paper. Not all the metrics were applicable to all datasets, as shown in the table. In addition, only those datasets with * passed the applicability criteria for a given metric, which includes minimal variability and performance threshold set for each metric.
+For a detailed description of each metric, refer to the geneRNIB paper. To map the naming conventions used in the code and this table, refer to `surrogate_names` in `config.py` file in the `src/utils/` directory.
+Not all the metrics were applicable to all datasets, as shown in the table. In addition, only those datasets with * passed the applicability criteria for a given metric, which includes minimal variability and performance threshold set for each metric.
 In addition, not all metrics passed the additional criteria for inclusion in the final score calculation, as explained in the paper, and marked with ** in the table. This includes context specificity and robustness in stability analysis.
 
 The evaluation metrics expect the inferred network to be in the form of an AnnData object with specific format as explained here.
