|
96 | 96 | "source": [ |
97 | 97 | "# Pre-trained Swin UNETR Encoder\n", |
98 | 98 | "\n", |
99 | | - "We use weights from self-supervised pre-training of Swin UNETR encoder (3D Swin Tranformer) on a cohort of 5050 CT scans from publicly available datasets. The encoder is pre-trained using reconstructin, rotation prediction and contrastive learning pre-text tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain). \n", |
| 99 | + "We use weights from self-supervised pre-training of the Swin UNETR encoder (3D Swin Transformer) on a cohort of 5,050 CT scans from publicly available datasets. The encoder is pre-trained using reconstruction, rotation prediction and contrastive learning pretext tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain). \n", |
100 | 100 | "\n", |
101 | 101 | "Please download the pre-trained weights from this [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) and place the file in the root directory of this tutorial. \n", |
102 | 102 | "\n", |
|
243 | 243 | " b_max=1.0,\n", |
244 | 244 | " clip=True,\n", |
245 | 245 | " ),\n", |
246 | | - " CropForegroundd(keys=[\"image\", \"label\"], source_key=\"image\"),\n", |
| 246 | + " CropForegroundd(keys=[\"image\", \"label\"], source_key=\"image\", allow_smaller=True),\n", |
247 | 247 | " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n", |
248 | 248 | " Spacingd(\n", |
249 | 249 | " keys=[\"image\", \"label\"],\n", |
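Review note on the `CropForegroundd` change above: pinning `allow_smaller=True` presumably preserves the transform's older default behavior after MONAI changed the default, so the cropped result can stay smaller than the image when the foreground box is tight. The core idea of `source_key="image"` — compute one bounding box from the image and apply it to both image and label — can be sketched in plain Python (2D lists in place of tensors for brevity; `foreground_bbox` and `crop_to_foreground` are illustrative names, not MONAI API):

```python
def foreground_bbox(image, threshold=0.0):
    # Tight bounding box of values > threshold, mirroring the idea behind
    # MONAI's CropForegroundd(source_key="image"), shown here in 2D.
    rows = [i for i, row in enumerate(image) if any(v > threshold for v in row)]
    cols = [j for j in range(len(image[0])) if any(row[j] > threshold for row in image)]
    return (rows[0], rows[-1] + 1), (cols[0], cols[-1] + 1)


def crop_to_foreground(image, label, threshold=0.0):
    # The box is computed from the image alone, then applied to both keys,
    # which is what source_key="image" does for keys=["image", "label"].
    (r0, r1), (c0, c1) = foreground_bbox(image, threshold)
    crop = lambda arr: [row[c0:c1] for row in arr[r0:r1]]
    return crop(image), crop(label)
```

This is a sketch of the concept only; the real transform also handles margins, channel dimensions, and the `allow_smaller` bound.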
|
292 | 292 | " [\n", |
293 | 293 | " LoadImaged(keys=[\"image\", \"label\"], ensure_channel_first=True),\n", |
294 | 294 | " ScaleIntensityRanged(keys=[\"image\"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),\n", |
295 | | - " CropForegroundd(keys=[\"image\", \"label\"], source_key=\"image\"),\n", |
| 295 | + " CropForegroundd(keys=[\"image\", \"label\"], source_key=\"image\", allow_smaller=True),\n", |
296 | 296 | " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n", |
297 | 297 | " Spacingd(\n", |
298 | 298 | " keys=[\"image\", \"label\"],\n", |
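For readers unfamiliar with the `ScaleIntensityRanged` line above: it linearly maps the CT window `[a_min, a_max] = [-175, 250]` HU to `[b_min, b_max] = [0, 1]` and clips out-of-window values. A minimal single-value sketch of that mapping (plain Python, not the MONAI implementation):

```python
def scale_intensity_range(x, a_min=-175.0, a_max=250.0, b_min=0.0, b_max=1.0, clip=True):
    # Linearly map [a_min, a_max] -> [b_min, b_max], then clip to the target
    # range, mirroring ScaleIntensityRanged for one voxel value.
    y = (x - a_min) / (a_max - a_min) * (b_max - b_min) + b_min
    if clip:
        y = min(max(y, b_min), b_max)
    return y
```

For example, `-175` HU maps to `0.0`, `250` HU to `1.0`, and any value above the window (e.g. bone at `1000` HU) is clipped to `1.0`.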
|
439 | 439 | "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", |
440 | 440 | "\n", |
441 | 441 | "model = SwinUNETR(\n", |
442 | | - " img_size=(96, 96, 96),\n", |
443 | 442 | " in_channels=1,\n", |
444 | 443 | " out_channels=14,\n", |
445 | 444 | " feature_size=48,\n", |
|
453 | 452 | "source": [ |
454 | 453 | "### Initialize Swin UNETR encoder from self-supervised pre-trained weights\n", |
455 | 454 | "\n", |
456 | | - "In this section, we intialize the Swin UNETR encoder from pre-trained weights. The weights can be downloaded using the wget command below, or by following [this link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) to GitHub. If training from scratch is desired, please skip this section." |
| 455 | + "In this section, we initialize the Swin UNETR encoder from pre-trained weights. The weights can be downloaded using the wget command below, or by following [this link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) to GitHub. If training from scratch is desired, please skip this section." |
457 | 456 | ] |
458 | 457 | }, |
459 | 458 | { |
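One detail worth noting for this section: self-supervised checkpoints trained with `torch.nn.DataParallel` typically carry a `module.` prefix on every state-dict key, which must be stripped before the weights line up with the bare encoder's parameter names (MONAI's `SwinUNETR.load_from` handles this kind of remapping internally). A minimal sketch of the prefix-stripping step, using plain dicts in place of tensors (the key names in the test are illustrative, not the exact checkpoint layout):

```python
def strip_prefix(state_dict, prefix="module."):
    # Drop a wrapper prefix (e.g. from torch.nn.DataParallel) from every key
    # so the weights match the bare encoder's parameter names.
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }
```

With PyTorch available, the notebook's own pattern would then be roughly `weight = torch.load("./model_swinvit.pt")` followed by `model.load_from(weights=weight)`.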
|