Commit 1570390

chunk performance healpix_increase_refinement_level

1 parent a62a22c

1 file changed: cf/field.py
Lines changed: 24 additions & 0 deletions
@@ -5286,6 +5286,30 @@ def healpix_increase_refinement_level(self, refinement_level, quantity):
         See CF Appendix F: Grid Mappings.
         https://doi.org/10.5281/zenodo.14274886

+        **Performance**
+
+        High refinement levels may require a very large Dask
+        chunksize to be set, to prevent a possible run-time failure
+        caused by an attempt to create an excessive number of
+        chunks for the healpix_index coordinates. For instance, with
+        the default Dask chunksize of 128 MiB, healpix_index
+        coordinates at refinement level 29 would need ~206 billion
+        Dask chunks, which is almost certainly enough to cause a
+        crash. In this case, a Dask chunksize of 1 pebibyte
+        results in only 24576 Dask chunks, a much more manageable
+        number::
+
+            >>> cf.chunksize()
+            <CF Constant: 134217728>
+            >>> f = cf.example_field(12)
+            >>> g = f.healpix_increase_refinement_level(10, 'intensive')
+            >>> assert g.coord('healpix_index').data.npartitions == 1
+            >>> g = f.healpix_increase_refinement_level(15, 'intensive')
+            >>> assert g.coord('healpix_index').data.npartitions == 816
+            >>> with cf.chunksize('1 PiB'):
+            ...     g = f.healpix_increase_refinement_level(29, 'intensive')
+            ...     assert g.coord('healpix_index').data.npartitions == 24576
+
         .. versionadded:: NEXTVERSION

         .. seealso:: `healpix_decrease_refinement_level`,
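The chunk counts quoted above (~206 billion chunks at refinement level 29 with the default 128 MiB chunksize, and 24576 chunks with a 1 PiB chunksize) can be sanity-checked with simple arithmetic. This sketch assumes, as is standard for HEALPix but not stated in the diff, that a grid at refinement level N has 12 * 4**N cells, and that each healpix_index value occupies 8 bytes (int64):

```python
# Sanity-check the chunk counts quoted in the docstring. Assumes a
# HEALPix grid has 12 * 4**level cells and that healpix_index values
# are stored as 8-byte (int64) integers.
level = 29
n_cells = 12 * 4**level             # ~3.46e18 cells at level 29
nbytes = n_cells * 8                # total bytes of index data

default_chunksize = 134217728       # 128 MiB, as reported by cf.chunksize()
print(nbytes // default_chunksize)  # 206158430208, i.e. ~206 billion chunks

one_pib = 2**50                     # 1 pebibyte
print(nbytes // one_pib)            # 24576 chunks
```

Because 12 * 4**29 * 8 bytes is exactly 12 * 2**61, both divisions are exact powers-of-two ratios, which is why the 1 PiB figure of 24576 comes out as a round number.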
