Commit 9236f71

Update readme
1 parent 04868c8 commit 9236f71

4 files changed

Lines changed: 51 additions & 38 deletions


vista3d/README.md

Lines changed: 46 additions & 37 deletions
@@ -18,22 +18,45 @@ limitations under the License.
<div align="center"> <img src="./assets/imgs/workflow.png" width="100%"/> </div>

## News!
[10/27/2025] We release NV-Segment-CTMR, a joint CT-MR automatic segmentation model trained on over 30K CT and MRI scans, supporting over 300 classes.

[03/12/2025] We provide VISTA3D as a baseline for the challenge "CVPR 2025: Foundation Models for Interactive 3D Biomedical Image Segmentation" ([link](https://www.codabench.org/competitions/5263/)). The simplified code based on MONAI 1.4 is provided [here](./cvpr_workshop/).

[02/26/2025] The VISTA3D paper has been accepted to **CVPR 2025**!

## Overview

## Model Comparison

| Feature | VISTA3D | NV-Segment-CT | NV-Segment-CTMR |
|---------|---------|---------------|-----------------|
| **Anatomical Classes** | [132 classes (7 types of tumors)](data/jsons/label_dict.json) | Same as VISTA3D | [345+ classes](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/metadata.json) ([details](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/label_dict.json)) |
| **Modalities** | CT only | Same as VISTA3D | CT + MRI (body & brain) |
| **Segmentation Type** | Automatic + interactive (point-click) | Same as VISTA3D | Automatic only |
| **Model Weights** | [NV-Segment-CT on HuggingFace (MONAI 1.3)](https://huggingface.co/nvidia/NV-Segment-CT) | [NV-Segment-CT on HuggingFace (MONAI 1.4, minor layer naming change)](https://huggingface.co/nvidia/NV-Segment-CT) | [NV-Segment-CTMR on HuggingFace](https://huggingface.co/nvidia/NV-Segment-CTMR) |
| **Implementation** | Current repo: MONAI 1.3 research code | Optimized MONAI bundle (MONAI >= 1.4) | Optimized MONAI bundle (MONAI >= 1.4) |
| **Usage** | Full training, inference, and finetuning for all models | Optimized, fast inference; lightweight finetuning; wrapped into a config and bundle | Same as NV-Segment-CT |
| **License** | [Commercial friendly](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) | Same as VISTA3D | [Non-commercial](https://developer.download.nvidia.com/licenses/NVIDIA-OneWay-Noncommercial-License-22Mar2022.pdf?t=eyJscyI6InJlZiIsImxzZCI6IlJFRi1naXRodWIuY29tL252aWRpYS1ob2xvc2NhbiJ9) |

```
We recommend NV-Segment-CTMR for large-scale automatic segmentation of CT and MRI scans because it is trained on large and diverse datasets. For CT tumor segmentation or interactive refinement, use NV-Segment-CT.
```

**VISTA3D/NV-Segment-CT** ([`Paper`](https://arxiv.org/pdf/2406.05285)) is a foundation model trained systematically on 11,454 volumes encompassing 127 types of human anatomical structures and various lesions. The model provides state-of-the-art performance on:
- out-of-the-box automatic segmentation on 3D CT scans
- zero-shot interactive segmentation in 3D CT scans
- automatic segmentation + interactive refinement

**NV-Segment-CTMR** starts from the NV-Segment-CT checkpoint and is finetuned on over 30K CT and MRI scans, supporting over 300 classes.

- out-of-the-box automatic segmentation on 3D CT and MRI scans
- shares the same architecture as the VISTA3D-CT model, but only the automatic segmentation branch was trained, with larger CT and MRI datasets

<div align="center"> <img src="./assets/imgs/ctmr.png" width="49%"/><img src="./assets/imgs/ctmr2.png" width="49%"/> </div>

<div align="center"> <img src="./assets/imgs/benchmarkct.png" width="49%"/><img src="./assets/imgs/benchmarkmr.png" width="49%"/> </div>

## Usage
@@ -51,12 +74,17 @@ cd NV-Segment-CTMR/NV-Segment-CTMR;
pip install -r requirements.txt;
cd ..;
mkdir NV-Segment-CT/models; mkdir NV-Segment-CTMR/models
# download from huggingface link for CT
hf download nvidia/NV-Segment-CT vista3d_pretrained_model/model.pt --local-dir NV-Segment-CT/models/ && \
mv NV-Segment-CT/models/vista3d_pretrained_model/model.pt NV-Segment-CT/models/model.pt && \
rmdir NV-Segment-CT/models/vista3d_pretrained_model
# download from huggingface link for CTMR
hf download nvidia/NV-Segment-CTMR vista3d_pretrained_model/model.pt --local-dir NV-Segment-CTMR/models/ && \
mv NV-Segment-CTMR/models/vista3d_pretrained_model/model.pt NV-Segment-CTMR/models/model.pt && \
rmdir NV-Segment-CTMR/models/vista3d_pretrained_model
```
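The per-model move-and-cleanup steps above can also be scripted; below is a minimal Python sketch using `pathlib` (the helper name is ours, and it assumes the checkpoint has already been downloaded into the layout that the `hf download` commands produce):

```python
from pathlib import Path

def flatten_checkpoint(models_dir, subdir="vista3d_pretrained_model", name="model.pt"):
    """Move models_dir/subdir/name up to models_dir/name, then drop the empty subdir."""
    models_dir = Path(models_dir)
    dst = models_dir / name
    (models_dir / subdir / name).rename(dst)  # equivalent of the `mv` step
    (models_dir / subdir).rmdir()             # equivalent of the `rmdir` step
    return dst
```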
## 1.1 **NV-Segment-CT**[[Github]](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/tree/main/NV-Segment-CT)[[Huggingface]](https://huggingface.co/nvidia/NV-Segment-CT)

#### Automatic Segmentation (supports multi-GPU batch processing)
[class definition](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CT/configs/label_dict.json)
@@ -81,8 +109,15 @@ python -m monai.bundle run --config_file configs/inference.json --input_dict "{'
**NOTE** MONAI bundle accepts multiple JSON config files and input arguments. Later configs/arguments override earlier ones when they have overlapping keys.
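Conceptually, this override behavior works like a left-to-right dictionary merge; a simplified sketch (the real bundle parser also resolves references and expressions):

```python
def merge_configs(*configs):
    """Merge config dicts left to right; later keys override earlier ones."""
    merged = {}
    for cfg in configs:
        merged.update(cfg)
    return merged

base = {"modality": "CT_BODY", "output_dir": "./eval"}
override = {"modality": "MRI_BRAIN"}
print(merge_configs(base, override))
# {'modality': 'MRI_BRAIN', 'output_dir': './eval'}
```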
## 1.2 **NV-Segment-CTMR**[[Github]](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/tree/main/NV-Segment-CTMR)[[Huggingface]](https://huggingface.co/nvidia/NV-Segment-CTMR/tree/main)
Please see the complete usage instructions in the NV-Segment-CTMR [[Github]](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/tree/main/NV-Segment-CTMR) repo.

We defined 345 classes as in [metadata.json](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/metadata.json), with details in [label_dict.json](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/label_dict.json), which lists each label's organ name, index, training dataset, modality, and evaluation Dice score. If a class comes only from a CT training dataset, it may not perform well on MRI, but the actual performance will vary case by case. We support three types of segment-everything prompts: "CT_BODY", "MRI_BODY", and "MRI_BRAIN".

- "CT_BODY" covers the 132 CT classes supported by the previous VISTA3D bundle, the same as the NV-Segment-CT everything prompts.
- "MRI_BODY" shares the same 50 label classes as TotalSegmentator MR.
- "MRI_BRAIN" is trained on the skull-stripped [LUMIR](https://github.com/JHU-MedImage-Reg/LUMIR_L2R) dataset and segments 133 brain MRI substructures. We followed the [MIR preprocessing](https://github.com/junyuchen245/MIR/tree/main/tutorials/brain_MRI_preprocessing) tutorials and included the corresponding components in this repo. All brain MRI contrasts are supported.
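As a sketch of how such a label dictionary can be consumed programmatically (the field names and sample records below are illustrative assumptions based on the description above, not the actual schema of label_dict.json):

```python
# Illustrative records only; the real file is configs/label_dict.json and
# its exact schema may differ from the fields assumed here.
label_dict = {
    "liver": {"index": 1, "modality": "CT", "dice": 0.96},
    "hippocampus": {"index": 200, "modality": "MRI", "dice": 0.88},
}

def classes_for_modality(labels, modality):
    """Return label names whose training data matches the given modality."""
    return [name for name, rec in labels.items() if rec["modality"] == modality]

print(classes_for_modality(label_dict, "MRI"))
```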
### Quick Start
#### Automatic Segmentation (supports multi-GPU batch processing)
[class definition](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/label_dict.json)
@@ -101,35 +136,9 @@ torchrun --nproc_per_node=2 --nnodes=1 -m monai.bundle run --config_file="['conf
```

#### Brain MRI segmentation
For brain MRI segmentation, skull stripping, bias correction, and MNI-space alignment are included in the codebase. Skull stripping requires a Docker environment. More details can be found in `run_brain_segmentation.sh`.
```bash
./brain_t1_preprocess/run_brain_segmentation.sh --input example/brain_t1.nii.gz --output_dir results/
```
## 2. CVPR 2025 research repo (current codebase, CT only)

vista3d/README_research.md

Lines changed: 5 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -56,8 +56,12 @@ cd ./VISTA/vista3d
5656
conda create -y -n vista3d python=3.9
5757
conda activate vista3d
5858
pip install -r requirements.txt
59+
mkdir models
60+
hf download nvidia/NV-Segment-CT vista3d_pretrained_model/model_monai1.3.pt --local-dir models/ && \
61+
mv models/vista3d_pretrained_model/model_monai1.3.pt models/model.pt && \
62+
rmdir models/vista3d_pretrained_model
5963
```
60-
Download the [model checkpoint](https://huggingface.co/nvidia/NV-Segment-CT/resolve/main/vista3d_pretrained_model/model_monai1.3.pt) and save it at ./models/model.pt. The researh repo ussed monai1.3 checkpoint while NVSegment-CTMR used monai1.4, which has subtle layer naming difference.
64+
The researh repo ussed monai1.3 checkpoint while NVSegment-CTMR used monai1.4, which has subtle layer naming difference.
6165

6266
## Inferencce
6367

vista3d/assets/imgs/ctmr.png

685 KB
Loading

vista3d/assets/imgs/ctmr2.png

308 KB
