[10/27/2025] We release NV-Segment-CTMR, a joint CT-MR automatic segmentation model trained on over 30K CT and MRI scans, supporting over 300 classes.
[03/12/2025] We provide VISTA3D as a baseline for the challenge "CVPR 2025: Foundation Models for Interactive 3D Biomedical Image Segmentation" ([link](https://www.codabench.org/competitions/5263/)). The simplified code based on MONAI 1.4 is provided [here](./cvpr_workshop/).
[02/26/2025] The VISTA3D paper has been accepted by **CVPR 2025**!
## Overview
| | VISTA3D (this repo) | NV-Segment-CT | NV-Segment-CTMR |
|---|---|---|---|
|**Anatomical Classes**|[132 classes (7 types of tumors)](data/jsons/label_dict.json)| Same as VISTA3D |[345+ classes](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/metadata.json) ([details](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/label_dict.json))|
|**Modalities**| CT only | Same as VISTA3D | CT + MRI (body & brain) |
|**Segmentation Type**| Automatic + interactive (point-click) | Same as VISTA3D | Automatic only |
|**Model Weights**|[NV-Segment-CT on HuggingFace (MONAI 1.3)](https://huggingface.co/nvidia/NV-Segment-CT)|[NV-Segment-CT on HuggingFace (MONAI 1.4, minor layer-naming change)](https://huggingface.co/nvidia/NV-Segment-CT)|[NV-Segment-CTMR on HuggingFace](https://huggingface.co/nvidia/NV-Segment-CTMR)|
|**Implementation**| Current repo: MONAI 1.3 research code | Optimized MONAI Bundle (MONAI >= 1.4) | Optimized MONAI Bundle (MONAI >= 1.4) |
|**Usage**| Full training for all models, inference, and finetuning | Optimized, fast inference and lightweight finetuning, wrapped into configs and a bundle | Same as NV-Segment-CT |
|**License**|[Commercial-friendly](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)| Same as VISTA3D |[Non-commercial](https://developer.download.nvidia.com/licenses/NVIDIA-OneWay-Noncommercial-License-22Mar2022.pdf?t=eyJscyI6InJlZiIsImxzZCI6IlJFRi1naXRodWIuY29tL252aWRpYS1ob2xvc2NhbiJ9)|
```
We recommend NV-Segment-CTMR for large-scale automatic segmentation of CT and MRI scans, since it is trained on large and diverse datasets. For CT tumor segmentation or interactive refinement, users should try NV-Segment-CT.
```
**VISTA3D/NV-Segment-CT** ([`Paper`](https://arxiv.org/pdf/2406.05285)) is a foundation model trained systematically on 11,454 volumes encompassing 127 types of human anatomical structures and various lesions. The model provides state-of-the-art performance on:
- out-of-the-box automatic segmentation on 3D CT scans
- zero-shot interactive segmentation in 3D CT scans
- automatic segmentation + interactive refinement
**NV-Segment-CTMR** starts from the NV-Segment-CT checkpoint and is finetuned on over 30K CT and MRI scans, supporting over 300 classes. The model:
- provides out-of-the-box automatic segmentation on 3D CT and MRI scans
- shares the same architecture as the VISTA3D-CT model, but only the automatic segmentation branch was trained, using larger CT and MRI datasets
**NOTE** A MONAI bundle accepts multiple JSON config files and input arguments. Later configs/arguments override earlier ones when they have overlapping keys.
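As a minimal sketch of this override behavior (the config file names and the `output_dir` key here are hypothetical; check the bundle's `configs/` directory for the real ones):

```bash
# Keys in the second config override the same keys in the first,
# and the command-line argument overrides both.
python -m monai.bundle run \
    --config_file "['configs/inference.json','configs/inference_mri.json']" \
    --output_dir ./eval
```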
Please see the complete usage in the NV-Segment-CTMR [[GitHub]](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/tree/main/NV-Segment-CTMR) repo.
We define 345 classes in [metadata.json](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/metadata.json), with details in [label_dict.json](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR/blob/main/NV-Segment-CTMR/configs/label_dict.json), which lists each label's organ name, index, training dataset, modality, and evaluation Dice score. If a class comes only from CT training data, it may not perform well on MRI, though actual performance varies case by case. We support three types of segment-everything prompts, "CT_BODY", "MRI_BODY", and "MRI_BRAIN" (a usage sketch follows the list below):
- "CT_BODY" is the previous VISTA3D bundle supported 132 CT classes. Same as NV-Segment-CT everything prompts.
- "MRI_BODY" shares the same 50 label class as TotalsegmentatorMR.
- "MRI_BRAIN" is trained on skull stripped [LUMIR](https://github.com/JHU-MedImage-Reg/LUMIR_L2R) dataset and will segment 133 brain MRI substructures. We followed [MIR Preprocessing](https://github.com/junyuchen245/MIR/tree/main/tutorials/brain_MRI_preprocessing) tutorials and put the corresponding components into this repo. `All contrasts of brain MRI are supported`
For brain MRI segmentation, skull stripping, bias-field correction, and MNI-space alignment are included in the codebase. Skull stripping requires a Docker environment. More details can be found in run_brain_segmentation.sh.
```bash
# Run preprocessing, segmentation, and revert the results back to the original space.
input=example/brain_t1.nii.gz                # change to your file path
output=example/brain_t1_preprocessed.nii.gz  # intermediate results save path
# Segmentation results will be saved to ./eval/$output_trans.nii.gz; users can
# modify the saved name in inference.json.
# The invocation below is assumed for illustration; see run_brain_segmentation.sh
# for its actual arguments.
bash run_brain_segmentation.sh "$input" "$output"
```
Download the [model checkpoint](https://huggingface.co/nvidia/NV-Segment-CT/resolve/main/vista3d_pretrained_model/model_monai1.3.pt) and save it as ./models/model.pt. The research repo uses the MONAI 1.3 checkpoint, while NV-Segment-CTMR uses MONAI 1.4, which has a subtle layer-naming difference.
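For example, from the command line:

```bash
# Fetch the MONAI 1.3 research checkpoint and place it where this repo expects it.
mkdir -p models
wget -O models/model.pt \
  "https://huggingface.co/nvidia/NV-Segment-CT/resolve/main/vista3d_pretrained_model/model_monai1.3.pt"
```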