docs/build/html/_sources/evaluation.rst.txt (+1 −1)
@@ -3,7 +3,7 @@ GRN evaluation
 =================
 The evaluation metrics used in geneRNIB are summarized below. For a detailed description of each metric, refer to the geneRNIB paper.
 
-We originally defined **eight evaluation metrics**, grouped into three categories: **Regression 1, Regression 2, and Wasserstein Distance**.
+We originally defined **eight evaluation metrics**, grouped into three categories: **Regression 1, Regression, and Wasserstein Distance**.
 However, we recently removed **Regression 1** as it did not prove to be effective for perturbational settings.
 
 - The **regression-based metrics** assess the predictive power of an inferred GRN by using regression models to predict perturbation data (evaluation data) based on the feature space constructed from the inferred network.
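The regression-based idea described in this hunk — predicting held-out perturbation data from features derived from the inferred network — can be sketched in a few lines. This is an illustrative toy, not geneRNIB's implementation: the function name, the edge-list representation, and the per-target linear fit are all assumptions.

```python
import numpy as np

def regression_score(edges, expr, gene_names, train_frac=0.7):
    """Toy regression-based GRN score (illustrative only).

    For every target gene, fit a linear model predicting its expression
    from the expression of its inferred regulators on a training split,
    then measure R^2 on the held-out split; return the mean over targets.
    edges: list of (tf, target, weight); expr: cells x genes array.
    """
    idx = {g: i for i, g in enumerate(gene_names)}
    regulators = {}
    for tf, target, _weight in edges:      # group regulators by target gene
        regulators.setdefault(target, []).append(tf)
    n_train = int(train_frac * expr.shape[0])
    scores = []
    for target, tfs in regulators.items():
        X = expr[:, [idx[tf] for tf in tfs]]
        y = expr[:, idx[target]]
        # least-squares fit on the training cells (with an intercept column)
        A = np.c_[X[:n_train], np.ones(n_train)]
        coef, *_ = np.linalg.lstsq(A, y[:n_train], rcond=None)
        # held-out R^2
        A_test = np.c_[X[n_train:], np.ones(expr.shape[0] - n_train)]
        pred = A_test @ coef
        resid = np.sum((y[n_train:] - pred) ** 2)
        total = np.sum((y[n_train:] - y[n_train:].mean()) ** 2)
        scores.append(1.0 - resid / total)
    return float(np.mean(scores))

# Synthetic demo: G1 is driven by TF1, so a network with the true edge
# should score higher than one pointing at an unrelated TF.
rng = np.random.default_rng(0)
genes = ["TF1", "TF2", "G1"]
expr = rng.normal(size=(100, 3))
expr[:, 2] = 2.0 * expr[:, 0] + 0.1 * rng.normal(size=100)
true_score = regression_score([("TF1", "G1", 1.0)], expr, genes)
null_score = regression_score([("TF2", "G1", 1.0)], expr, genes)
```

A network whose edges capture real regulatory dependence earns a high held-out R², while a network of spurious edges earns a score near zero or below.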
docs/build/html/evaluation.html (+1 −1)
@@ -76,7 +76,7 @@
 <section id="grn-evaluation">
 <h1>GRN evaluation<a class="headerlink" href="#grn-evaluation" title="Link to this heading"></a></h1>
 <p>The evaluation metrics used in geneRNIB are summarized below. For a detailed description of each metric, refer to the geneRNIB paper.</p>
-<p>We originally defined <strong>eight evaluation metrics</strong>, grouped into three categories: <strong>Regression 1, Regression 2, and Wasserstein Distance</strong>.
+<p>We originally defined <strong>eight evaluation metrics</strong>, grouped into three categories: <strong>Regression 1, Regression, and Wasserstein Distance</strong>.
 However, we recently removed <strong>Regression 1</strong> as it did not prove to be effective for perturbational settings.</p>
 <ul class="simple">
 <li><p>The <strong>regression-based metrics</strong> assess the predictive power of an inferred GRN by using regression models to predict perturbation data (evaluation data) based on the feature space constructed from the inferred network.</p></li>
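The Wasserstein Distance category is only named in these hunks, not defined. As a rough illustration of the underlying quantity — a distance between two expression distributions, e.g. control versus perturbed cells for a gene — the 1-D empirical Wasserstein distance reduces to a simple sorted-sample computation. How geneRNIB applies it is not shown here; this sketch just demonstrates the distance itself.

```python
import numpy as np

def wasserstein_1d(a, b):
    """1-D empirical Wasserstein (earth mover's) distance between two
    samples of equal size: mean absolute difference of the sorted values."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    return float(np.mean(np.abs(a - b)))

# Demo: a perturbation that shifts a gene's expression distribution by
# about 2 units yields a distance near 2; identical samples yield 0.
rng = np.random.default_rng(0)
ctrl = rng.normal(0.0, 1.0, size=500)
pert = rng.normal(2.0, 1.0, size=500)
shift = wasserstein_1d(ctrl, pert)
```

For unequal sample sizes one would compare the empirical quantile functions instead (e.g. `scipy.stats.wasserstein_distance`).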
docs/source/evaluation.rst (+23 −2)
@@ -1,15 +1,22 @@
 
 GRN evaluation
 =================
-The evaluation metrics used in geneRNIB are summarized below. For a detailed description of each metric, refer to the geneRNIB paper.
-
+The evaluation metrics used in geneRNIB are summarized below.
 
 
 .. image:: images/metrics.png
    :width: 90%
    :align: center
 ----
 
+.. image:: images/datasets_metrics.png
+   :width: 90%
+   :align: center
+----
+
+
+For a detailed description of each metric, refer to the geneRNIB paper.
+
 The evaluation metrics expect the inferred network to be in the form of an AnnData object with specific format as explained here.
 It should be noted that the metric currently evaluate only the **top TF-gene pairs**, currently limited to **50,000 edges**, ranked by their assigned weight.
 
@@ -21,6 +28,7 @@ The inferred network should have a tabular format with the following columns:
 
 See `resources/grn_benchmark/prior/collectri.h5ad` for an example of the expected format.
 
+## Running GRN evaluation using standard pipeline
 
 To run the evalution for a given GRN and dataset, use the following command:
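The expected network format in this file — a tabular edge list backed by an AnnData object, trimmed to the top 50,000 TF-gene pairs by weight — can be illustrated with a small sketch. The column names `source`/`target`/`weight` and the use of absolute weight for ranking are assumptions for illustration, not geneRNIB's confirmed schema; see `resources/grn_benchmark/prior/collectri.h5ad` for the real example.

```python
import pandas as pd

def top_edges(net: pd.DataFrame, max_edges: int = 50_000) -> pd.DataFrame:
    """Keep only the highest-ranked TF-gene pairs, mirroring the
    50,000-edge cap described in the docs. Ranking by absolute weight
    is an assumption of this sketch."""
    order = net["weight"].abs().sort_values(ascending=False).index
    return net.reindex(order).head(max_edges).reset_index(drop=True)

# Hypothetical three-edge network with assumed column names.
net = pd.DataFrame({
    "source": ["TF1", "TF1", "TF2"],
    "target": ["GENE_A", "GENE_B", "GENE_C"],
    "weight": [0.1, -0.9, 0.5],
})
pruned = top_edges(net, max_edges=2)
```

The same pruning applies unchanged when the table lives in an AnnData slot, since it operates on a plain DataFrame.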
-geneRNIB is a living benchmark platform for GRN inference. This platform provides curated datasets for GRN inference and evaluation, standardized evaluation protocols and metrics, computational infrastructure, and a dynamically updated leaderboard to track state-of-the-art methods. It runs novel GRNs in the cloud, offers competition scores, and stores them for future comparisons, reflecting new developments over time.
+geneRNIB is a living benchmark platform for GRN inference. This platform provides curated datasets for GRN inference and evaluation, standardized evaluation protocols and metrics, computational infrastructure, and a dynamically updated leaderboard to track state-of-the-art methods.
+It runs novel GRNs in the cloud, offers competition scores, and stores them for future comparisons, reflecting new developments over time.
 
-The platform supports the integration of new inference methods, datasets, and protocols. When a new feature is added, previously evaluated GRNs are re-assessed, and the leaderboard is updated accordingly. The aim is to evaluate both the accuracy and completeness of inferred GRNs. It is designed for both single-modality and multi-omics GRN inference.
+The platform supports the integration of new inference methods, datasets, and protocols. When a new feature is added, previously evaluated GRNs are re-assessed, and the leaderboard is updated accordingly.
+It is designed for both single-modality and multi-omics GRN inference.
 
 .. image:: images/overview.png
    :width: 70%
    :align: center
 ----
 
-This documentation is supplementary to the paper `geneRNIB: a living benchmark for gene regulatory network inference <add a link here>`_ and the `GitHub page <https://github.com/openproblems-bio/task_grn_inference>`_ on the OpenProblems platform.
+This documentation is supplementary to the paper `geneRNIB: a living benchmark for gene regulatory network inference <https://www.biorxiv.org/content/10.1101/2025.02.25.640181v1.full.pdf>`_ and the `GitHub page <https://github.com/openproblems-bio/task_grn_inference>`_ on the OpenProblems platform.
 
-To install geneRNIB, see the `GitHub page <https://github.com/openproblems-bio/task_grn_inference>`_.
-
-For instructions on how to download and access datasets, refer to the :doc:`dataset` section.
+- To install geneRNIB, see the `GitHub page <https://github.com/openproblems-bio/task_grn_inference>`_
+- To download, see :doc:`dataset` page
+- To perform GRN inference using our integrated methods, see :doc:`inference` page
+- To run evaluation metrics, see :doc:`evaluation` page
+- To extend geneRNIB with new methods, metrics, or datasets, see :doc:`extending` page
+- To view the leaderboard of integrated methods, see :doc:`leaderboard` page
 
-For information on evaluation metrics, refer to the :doc:`evaluation` section.
+.. .. image:: images/grn_models.png
+.. :width: 70%
+.. :align: center
+.. ----
 
-To integrate your GRN inference method, metric, or dataset, follow the instructions in the :doc:`extending` section.
 
-To see the comparitive performance of the integrated GRN inference methods, refer to the :doc:`leaderboard` section.
-
-.. image:: images/grn_models.png
-   :width: 70%
-   :align: center
-----
+.. Pls see the GitHub page for the list of currently integrated methods. The methods are implemented in Python and R, and they can be used to infer GRNs from the datasets provided by geneRNIB.
 
+.. In addition, three baseline methods are integrated into geneRNIB. These methods are used to evaluate the performance of new methods. The baseline methods are:
 
-Pls see the GitHub page for the list of currently integrated methods. The methods are implemented in Python and R, and they can be used to infer GRNs from the datasets provided by geneRNIB.
-
-In addition, three baseline methods are integrated into geneRNIB. These methods are used to evaluate the performance of new methods. The baseline methods are:
-
-- **Negative control**: Randomly assigns weights to edges. GRN inference methods should outperform this method.
-- **Pearson correlation**: Assigns weights based on the Pearson correlation between genes.
-- **Positive control**: Similar to Pearson correlation with the exception that it uses both inference and evaluation dataset to infer the GRN. This method is expected to outperform most methods.
+.. - **Negative control**: Randomly assigns weights to edges. GRN inference methods should outperform this method.
+.. - **Pearson correlation**: Assigns weights based on the Pearson correlation between genes.
+.. - **Positive control**: Similar to Pearson correlation with the exception that it uses both inference and evaluation dataset to infer the GRN. This method is expected to outperform most methods.
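The baseline bullets above (commented out in the new revision but still describing geneRNIB's controls) lend themselves to a short sketch: a Pearson-correlation baseline that weights each TF-gene edge by expression correlation, and a negative control with random weights. Function names and the edge-list representation are invented for illustration; this is not geneRNIB's code.

```python
import numpy as np

def pearson_baseline(expr, gene_names, tfs):
    """Pearson-correlation baseline: weight each TF-gene edge by the
    correlation between the two genes' expression profiles."""
    corr = np.corrcoef(expr.T)             # gene-by-gene correlation matrix
    idx = {g: i for i, g in enumerate(gene_names)}
    return [(tf, g, float(corr[idx[tf], idx[g]]))
            for tf in tfs for g in gene_names if g != tf]

def negative_control(gene_names, tfs, rng):
    """Negative control: random edge weights; real methods should beat it."""
    return [(tf, g, float(rng.normal()))
            for tf in tfs for g in gene_names if g != tf]

# Demo: G1 tracks TF1 almost exactly, G2 is independent noise, so the
# correlation baseline should rank the TF1->G1 edge far above TF1->G2.
rng = np.random.default_rng(1)
genes = ["TF1", "G1", "G2"]
expr = rng.normal(size=(50, 3))
expr[:, 1] = expr[:, 0] + 0.01 * rng.normal(size=50)
weights = {(s, t): w for s, t, w in pearson_baseline(expr, genes, ["TF1"])}
```

The positive control described in the bullets would apply the same correlation computation to the concatenation of inference and evaluation data, which is why it is expected to outperform methods that never see the evaluation set.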