|
19 | 19 | "Data Release: DP1 <br>\n", |
20 | 20 | "Container Size: Large <br>\n", |
21 | 21 | "LSST Science Pipelines version: Weekly 29.2.0 <br>\n", |
22 | | - "Last verified to run: 2025-09-22 <br>\n", |
| 22 | + "Last verified to run: 2025-10-05 <br>\n", |
23 | 23 | "Repository: <a href=\"https://github.com/lsst/tutorial-notebooks\">github.com/lsst/tutorial-notebooks</a> <br>" |
24 | 24 | ] |
25 | 25 | }, |
|
52 | 52 | "\n", |
53 | 53 | "## 1. Introduction\n", |
54 | 54 | "\n", |
55 | | - "This notebook demonstrates hot to reconstruct per-pixel models of objects that are deblended by the LSST Science Pipeline when building the `Object` table.\n", |
| 55 | + "This notebook demonstrates how to reconstruct per-pixel models of objects that are deblended by the LSST Science Pipeline when building the `Object` table.\n", |
56 | 56 | "\n", |
57 | | - "To measure the photometry of objects in `deep_coadd` images the LSST Science Pipelines use the multi-band deblending scarlet algorithm [Melchior et al. 2018](https://ui.adsabs.harvard.edu/abs/2018A%26C....24..129M/abstract) implemented with optimizations for LSST data in [scarlet lite](https://github.com/lsst/scarlet_lite). The core algorithm models objects in a blend as a collection of components that have the same spectrum in each pixel of their morphology, assuming that flux monotonically decreases from the peak (maximum pixel value) of the component.\n", |
| 57 | + "To measure the photometry of objects in `deep_coadd` images, the LSST Science Pipelines use the multi-band deblending scarlet algorithm [Melchior et al. 2018](https://ui.adsabs.harvard.edu/abs/2018A%26C....24..129M/abstract) implemented with optimizations for LSST data in [Scarlet Lite](https://github.com/lsst/scarlet_lite). The core algorithm models objects in a blend as a collection of components that have the same spectrum in each pixel of their morphology, assuming that flux monotonically decreases from the peak (maximum pixel value) of the component.\n", |
58 | 58 | "\n", |
59 | 59 | "This tutorial will begin with the coordinates of an object of interest and show how to obtain its model and the model of the other sources that are extracted from the same blend.\n", |
60 | 60 | "\n", |
61 | 61 | "**Note**: There have been updates to the deblending implementation since DP1, so _expect the API to be slightly different (improved) in future data releases_, including having more information available to understand how blended an object is with its neighbors and useful cuts to remove potentially more difficult blends from analysis.\n", |
62 | 62 | "\n", |
63 | 63 | "**References:** \n", |
64 | 64 | "- A general discussion of blending impacts to LSST can be found in this article titled \"<a href=\"https://lss.fnal.gov/archive/2021/pub/fermilab-pub-21-598-ppd.pdf\">The challenge of blending in large sky surveys</a>\" by Melchior et al. 2021. \n", |
65 | | - "- Some more in-depth presentations on the deblending implementation in the LSST Science Pipelines are available from recorded talks during the Rubin Project and Community Workshop 2022's session on \"<a href=\"https://project.lsst.org/meetings/rubin2022/agenda/deblending-plans-and-challenges\">Deblending Plans and Challenges</a>\". \n", |
| 65 | + "- Some more in-depth presentations on the deblending implementation in the LSST Science Pipelines are available from recorded talks during the Rubin Project and Community Workshop 2022's session on \"<a href=\"https://project.lsst.org/meetings/rubin2022/agenda/deblending-plans-and-challenges\">Deblending Plans and Challenges</a>\". \n", |
66 | 66 | "- Further information about the available deblending flags, and the processes that produce the deblender data products discussed in this tutorial\n", |
67 | 67 | "can be found in this <a href=\"https://pipelines.lsst.io/modules/lsst.pipe.tasks/deblending-flags-overview.html\">overview of the deblending flags</a>.\n", |
68 | 68 | "\n", |
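The component model described in the introduction can be sketched with plain numpy. This is an illustrative toy, not the Scarlet Lite API: a component is the outer product of a per-band spectrum and a monotonically decreasing single-band morphology.

```python
import numpy as np

# Toy scarlet-style component (illustrative only, not the Scarlet Lite API):
# a single morphology image whose flux decreases monotonically away from
# its peak, scaled by the same per-band spectrum in every pixel.
ny, nx = 5, 5
yy, xx = np.mgrid[0:ny, 0:nx]
r = np.hypot(yy - ny // 2, xx - nx // 2)   # distance from the peak pixel
morphology = np.exp(-r)                    # monotonic fall-off, unit peak
spectrum = np.array([0.2, 1.0, 0.8])       # toy flux in 3 bands

# Same spectrum in each pixel of the morphology: (band, y, x) cube.
component = spectrum[:, None, None] * morphology[None, :, :]
print(component.shape)  # (3, 5, 5)
```

A full blend model is then just the sum of such components over all sources in the blend.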
|
78 | 78 | "source": [ |
79 | 79 | "### 1.1. Import packages\n", |
80 | 80 | "\n", |
81 | | - "The `lsst.afw.image` provide access to some of the extended products created by the deblender pipeline. The `lsst.afw.display` library provides access to image visualization routines and the `lsst.daf.butler` library is used to access data products via the butler. Finally, `lsst.scarlet.lite` and `lsst.meas.extensions.scarlet` are two packages containing the scarlet software infrastructure, to enable displaying and analyzing deblended models and object footprints. " |
| 81 | + "`lsst.afw.image` provides access to some of the extended products created by the deblender pipeline. The `lsst.afw.display` library provides access to image visualization routines, and the `lsst.daf.butler` library is used to access data products via the butler. Finally, `lsst.scarlet.lite` and `lsst.meas.extensions.scarlet` are two packages containing the scarlet software infrastructure that enables displaying and analyzing deblended models and object footprints. " |
82 | 82 | ] |
83 | 83 | }, |
84 | 84 | { |
|
184 | 184 | "source": [ |
185 | 185 | "### 2.1. Query for deblended objects\n", |
186 | 186 | "\n", |
187 | | - "Re-use a blend from notebook 206-1 on deblender data products, whose `parentObjectId` 611256447031839519. The blend is near the center of the ECDFS field. Define the center coordinates to enable a faster search for all objects that were deblended from this parent." |
| 187 | + "Re-use a known blend whose `parentObjectId` is 611256447031839519. The blend is near the center of the ECDFS field. Define the center coordinates to enable a faster search for all objects that were deblended from this parent." |
188 | 188 | ] |
189 | 189 | }, |
190 | 190 | { |
|
286 | 286 | "metadata": {}, |
287 | 287 | "outputs": [], |
288 | 288 | "source": [ |
289 | | - "def cutout_coadd(butler, ra, dec, band='r', datasetType='deepCoadd',\n", |
| 289 | + "def cutout_coadd(butler, ra, dec, band='i', datasetType='deepCoadd',\n", |
290 | 290 | " skymap=None, cutoutSideLength=51, **kwargs):\n", |
291 | 291 | " \"\"\"\n", |
292 | 292 | " Produce a cutout from a coadd at the given ra, dec position.\n", |
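The cutout geometry used by `cutout_coadd` can be sketched without the LSST stack. The helper below is hypothetical and uses plain tuples; it assumes a square box of `cutoutSideLength` pixels with inclusive corners, as in afw's `Box2I`.

```python
# Hypothetical sketch of the cutout geometry in cutout_coadd: a square
# bounding box of `side` pixels centered on a pixel position, returned as
# inclusive (x_min, y_min, x_max, y_max) corners.
def cutout_bbox(x_center, y_center, side=51):
    half = side // 2
    return (x_center - half, y_center - half,
            x_center - half + side - 1, y_center - half + side - 1)

bbox = cutout_bbox(100, 200, side=51)
```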
|
331 | 331 | " parameters = {'bbox': bbox}\n", |
332 | 332 | "\n", |
333 | 333 | " query = \"\"\"band.name = '{}' AND patch = {} AND tract = {}\n", |
334 | | - " \"\"\".format('i', patch, tract)\n", |
| 334 | + " \"\"\".format(band, patch, tract)\n", |
335 | 335 | " print(query)\n", |
336 | 336 | "\n", |
337 | 337 | " dataset_refs = butler.query_datasets(\"deep_coadd\", where=query)\n", |
|
399 | 399 | "id": "4dc608a8-cb6b-48c8-9aa9-e30338da2070", |
400 | 400 | "metadata": {}, |
401 | 401 | "source": [ |
402 | | - "> Figure 1: The cutout image displayed in greyscale, with red circles marking the deblended children. \n" |
| 402 | + "> Figure 1: The cutout of the i-band `deep_coadd` image displayed in greyscale, with red circles marking the deblended children. \n" |
403 | 403 | ] |
404 | 404 | }, |
405 | 405 | { |
|
491 | 491 | "id": "1eba1318-9e93-409f-a82b-9db6deff25bc", |
492 | 492 | "metadata": {}, |
493 | 493 | "source": [ |
494 | | - "The LSST science pipelines use persistable `dataclass` objects as intermediaries between the storage containers and python classes. In the case of scarlet lite blends this is the `ScarletBlendData` class. Some attributes of the blend model can be retrieved from `modelData`.\n", |
| 494 | + "The LSST Science Pipelines use persistable `dataclass` objects as intermediaries between the storage containers and Python classes. In the case of Scarlet Lite blends, this is the `ScarletBlendData` class. Some attributes of the blend model can be retrieved from `modelData`.\n", |
495 | 495 | "\n", |
496 | 496 | "In order to reconstruct the blend, the model PSF from the deconvolved model space is also required. The model PSF is the same for all blends, so extract that here.\n", |
497 | 497 | "\n", |
|
529 | 529 | "from numpy.typing import DTypeLike\n", |
530 | 530 | "from lsst.scarlet.lite import Box, Blend, Observation\n", |
531 | 531 | "\n", |
| 532 | + "\n", |
532 | 533 | "def minimal_data_to_blend(cls, model_psf: np.ndarray, dtype: DTypeLike) -> Blend:\n", |
| 534 | + "\n", |
533 | 535 | " \"\"\"Convert the storage data model into a scarlet lite blend\n", |
534 | 536 | "\n", |
535 | 537 | " Parameters\n", |
|
554 | 556 | " )\n", |
555 | 557 | " return cls.to_blend(observation)\n", |
556 | 558 | "\n", |
| 559 | + "\n", |
557 | 560 | "import lsst.scarlet as scarlet\n", |
558 | 561 | "scarlet.lite.io.ScarletBlendData.minimal_data_to_blend = minimal_data_to_blend" |
559 | 562 | ] |
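The last two lines of the cell above attach a free function to a class so it becomes available as a bound method. The same pattern, with a toy class rather than `ScarletBlendData`, looks like this:

```python
# Sketch of the attach-a-method pattern: assigning a function to a class
# makes it a bound method on every instance (toy class, not the real
# ScarletBlendData).
class ToyData:
    def __init__(self, value):
        self.value = value

def doubled(self):
    return self.value * 2

ToyData.doubled = doubled          # monkey-patch, as done for ScarletBlendData
result = ToyData(21).doubled()     # → 42
```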
|
602 | 605 | "id": "2f0198a2-7081-41c5-b9dc-f7a019f55f46", |
603 | 606 | "metadata": {}, |
604 | 607 | "source": [ |
605 | | - "Printing `blend_data.bands` shows that this patch and tract in ECDFS was observed with all 6 bands but they are not in the order of increasing wavelength. To ensure that expected order, specify `model[tuple(\"ugrizy\")]` when retrieving the blend model to display the image with the expected color assignment in the cell below. If fewer bands of coverage exist (as is the case for some DP1 fields), simply replace this by the combination of observed bands, e.g. `tuple(\"griz\")`. We do this here as the `u` and `y` bands add unnecessary noise that makes it difficult to see the underlying objects.\n", |
| 608 | + "Printing `blend_data.bands` shows that this patch and tract in ECDFS were observed with all 6 bands, but they are not in the order of increasing wavelength. To ensure the expected order, specify `model[tuple(\"ugrizy\")]` when retrieving the blend model to display the image with the expected color assignment in the cell below. If fewer bands of coverage exist (as is the case for some DP1 fields), simply replace this by the combination of observed bands, e.g. `tuple(\"griz\")`. For this example, select only 4 filters and exclude the `u` and `y` bands, which add unnecessary noise that makes it difficult to see the underlying objects.\n", |
606 | 609 | "\n", |
607 | | - "Note: this will change the order of the bands displayed in the model but **not** in the `blend` instance. After DP1 a change is planned so that `blend = blend[display_bands]` can be used to reorganize all of the models so that they are displayed in a new order." |
| 610 | + "Note: this will change the order of the bands displayed in the model but **not** in the `blend` instance. After DP1, a change is planned so that `blend = blend[display_bands]` can be used to reorganize all of the models so that they are displayed in a new order." |
608 | 611 | ] |
609 | 612 | }, |
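Band reordering by label can be mimicked with a plain numpy cube (this is the underlying idea, not the scarlet `Image` indexing API):

```python
import numpy as np

# Reorder a (band, y, x) model cube by band label, mimicking
# model[tuple("griz")] with plain numpy fancy indexing.
observed_bands = ("r", "i", "g", "z")          # storage order, not wavelength order
model = np.arange(4 * 2 * 2).reshape(4, 2, 2)  # toy (band, y, x) cube
display_bands = tuple("griz")                  # desired wavelength order

index = [observed_bands.index(b) for b in display_bands]
reordered = model[index]                       # bands now in display order
```

As the note above says, this changes only the displayed order; the original cube is untouched.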
610 | 613 | { |
|
623 | 626 | "id": "d9f03503-8a25-4be4-b5a8-f9558eb1b769", |
624 | 627 | "metadata": {}, |
625 | 628 | "source": [ |
626 | | - "Finally, plot a color image of the blend model. Use the `AsinhMapping` to normalize: the asinh stretch preserves colors independent of brightness (for more information see <a href=\"https://ui.adsabs.harvard.edu/abs/2004PASP..116..133L\">Lupton et al. 2024</a>). By default, Scarlet Lite will map the input filters into an RGB scaling where shorter wavelength filters map to blue and longer wavelength map to red (and assumes the order in the model is set as such, defined in the previous cell). This relative weighting between filters to RGB can be altered with the `channel_map` keyword to `img_to_rgb` (see documentation <a href=\"https://pmelchior.github.io/scarlet/tutorials/display.html\">here</a>)." |
| 629 | + "Finally, plot a color image of the blend model. Use the `AsinhMapping` to normalize: the asinh stretch preserves colors independent of brightness (for more information see <a href=\"https://ui.adsabs.harvard.edu/abs/2004PASP..116..133L\">Lupton et al. 2004</a>). By default, Scarlet Lite will map the input filters into a red green blue (RGB) scaling where shorter wavelength filters map to blue and longer wavelength filters map to red (and assumes the order in the model is set as such, defined in the previous cell). This relative weighting of filters to RGB can be altered with the `channel_map` keyword to `img_to_rgb` (see documentation <a href=\"https://pmelchior.github.io/scarlet/tutorials/display.html\">here</a>)." |
627 | 630 | ] |
628 | 631 | }, |
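A minimal version of the asinh stretch, in the spirit of Lupton et al. 2004 (illustrative only; the notebook uses scarlet's `AsinhMapping`):

```python
import numpy as np

# Toy asinh stretch: roughly linear near zero, compressive for bright
# pixels, so relative band ratios (colors) survive across brightness.
# Parameter names mirror the usual (minimum, stretch, Q) convention but
# this is a sketch, not scarlet's AsinhMapping.
def asinh_stretch(img, minimum=0.0, stretch=1.0, Q=8.0):
    return np.arcsinh(Q * (img - minimum) / stretch) / Q

img = np.array([0.0, 0.1, 1.0, 10.0])
stretched = asinh_stretch(img)
```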
629 | 632 | { |
|
674 | 677 | "id": "9c1c15ff-a8b6-4110-8bec-f0a5d5ba6a63", |
675 | 678 | "metadata": {}, |
676 | 679 | "source": [ |
677 | | - "Build the bounding box of the parent (this allows extraction of a smaller region from the full coadd).\n", |
678 | | - "- The bands that we wish to load. Note: this needs to be in the same order as the `blend` band order in DP1.\n", |
679 | 680 | "\n", |
680 | 681 | "The tract and patch were already identified as part of the `deep_coadd` dataId in Section 3. Here, all that is needed is to extract the bounding box of the `blend` and convert it from a `lsst.scarlet.lite.Box` to an `lsst.afw.Box2I`.\n", |
681 | | - "\n" |
| 682 | + "\n", |
| 683 | + "In the cell below, build the bounding box of the parent (this allows extraction of a smaller region from the full coadd).\n" |
682 | 684 | ] |
683 | 685 | }, |
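The box conversion can be sketched with plain tuples; the (y, x) origin/shape convention for the scarlet-style box and inclusive corners for the afw-style box are assumptions of this toy helper, not the real API.

```python
# Hypothetical sketch: convert an origin/shape box (assumed (y, x) order,
# as in scarlet.lite's Box) into inclusive (x_min, y_min, x_max, y_max)
# corners in the style of afw's Box2I.
def box_to_corners(origin, shape):
    y0, x0 = origin
    height, width = shape
    # Inclusive corners, hence the -1
    return (x0, y0, x0 + width - 1, y0 + height - 1)

corners = box_to_corners(origin=(100, 250), shape=(40, 60))
```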
684 | 686 | { |
|
699 | 701 | "source": [ |
700 | 702 | "Load the subset of the image that overlaps with the blend, then use the `blend_data` and the `observation` to create a full `Blend`. All of the model information is the same as the information from the `Blend` that we created earlier. Here, attach real `Observation` data as opposed to an empty `Observation` (as was done in Section 3.2). Notice that the `observed_bands` are passed here, not the `display_bands`, as these bands must match the order of the bands in the `blend`.\n", |
701 | 703 | "\n", |
702 | | - "Model residuals compared to the observations are of interest for exploring how well the blend model matches the data. To do this, first load a multiband exposure as the input observation. Use the patch and tract to reconstruct the multiband `deep_coadd` for this blend.\n" |
| 704 | + "Model residuals compared to the observations are of interest for exploring how well the blend model matches the data. To do this, first load a multiband exposure as the input observation. Use the `patch` and `tract` to reconstruct the multiband `deep_coadd` for this blend.\n" |
703 | 705 | ] |
704 | 706 | }, |
705 | 707 | { |
|
716 | 718 | "observation = mes.utils.buildObservation(modelPsf=model_psf,\n", |
717 | 719 | " psfCenter=bbox.getCenter(),\n", |
718 | 720 | " mExposure=mCoadd,\n", |
719 | | - " )\n", |
| 721 | + " )\n", |
720 | 722 | "\n", |
721 | 723 | "blend = blend_data.to_blend(observation)" |
722 | 724 | ] |
|
761 | 763 | "\n", |
762 | 764 | "Now display the scene. The scarlet function `show_scene` will include the model footprints, the rendered model (i.e. convolved with the PSF), the locations on the observed multiband image, and residuals between the real image and the model. Model residuals compared to the observations are of interest for exploring how well the blend model matches the data. \n", |
763 | 765 | "\n", |
764 | | - "To normalize the image use the Lupton red green blue (RGB) AsinhMapping, which preserves the observed colors.\n", |
| 766 | + "To normalize the image, use the Lupton RGB AsinhMapping, which preserves the observed colors.\n", |
765 | 767 | "\n", |
766 | 768 | "Note: You'll notice that the colors match the processed color order, not the correct band order. In the future, passing `blend[display_bands]` should display everything in the correct order.\n" |
767 | 769 | ] |
|
854 | 856 | "source": [ |
855 | 857 | "### 5.2. Creating the flux re-distributed models\n", |
856 | 858 | "\n", |
857 | | - "In the LSST science pipelines, measurements are not made on the scarlet models directly. Imperfections in the PSF measurement and deviations of bright galaxies from a simple two-component monotonic morphology solution (see Melchior paper cited above) result in improper measurements. THIS STATEMENT TBD: Instead, a deblending algorithm based on the SDSS deblender is employed, using the scarlet models as templates, and flux is re-distributed from the input image based on the ratio of values for overlapping templates in an image." |
| 859 | + "In the LSST Science Pipelines, measurements are not made on the Scarlet models directly. Imperfections in the PSF measurement and deviations of bright galaxies from a simple two-component monotonic morphology solution (see Melchior paper cited above) result in improper measurements. Instead, a deblending algorithm based on the SDSS deblender is employed, using the Scarlet models as templates, and flux is re-distributed from the input image based on the ratio of values for overlapping templates in an image." |
858 | 860 | ] |
859 | 861 | }, |
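The ratio-based flux re-apportionment described above can be sketched in one dimension (toy arrays, not the pipeline implementation): each child receives the observed flux in proportion to its template's share of the summed templates, so the split conserves the observed flux pixel by pixel.

```python
import numpy as np

# Toy SDSS-style flux re-apportionment: per-pixel weights are each child's
# template divided by the sum of all templates; applying the weights to the
# observed image splits the flux while conserving it in every pixel.
image = np.array([10.0, 20.0, 30.0])
templates = np.array([[1.0, 3.0, 0.0],     # child A template
                      [1.0, 1.0, 2.0]])    # child B template

total = templates.sum(axis=0)
weights = np.divide(templates, total,
                    out=np.zeros_like(templates), where=total > 0)
child_flux = weights * image               # flux-conserving split of the image
```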
860 | 862 | { |
|