satpy.multiscene module

MultiScene object to work with multiple timesteps of satellite data.

class satpy.multiscene.MultiScene(scenes=None)[source]

Bases: object

Container for multiple Scene objects.

Initialize MultiScene and validate sub-scenes.

Parameters

scenes (iterable) – Scene objects to operate on (optional)

Note

If the scenes passed to this object are a generator then certain operations performed will try to preserve that generator state. This may limit what properties or methods are available to the user. To avoid this behavior compute the passed generator by converting the passed scenes to a list first: MultiScene(list(scenes)).
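The note above can be illustrated with plain Python: a generator can only be traversed once, while a list can be traversed repeatedly. This is a stdlib-only sketch; the `make_scenes` generator and its string placeholders are hypothetical stand-ins for real satpy Scene objects.

```python
# A generator is consumed by iteration; a list is not.
# MultiScene(list(scenes)) converts up front to avoid the single-use limits.

def make_scenes():
    """Hypothetical generator yielding scene placeholders."""
    for name in ("scene_t0", "scene_t1", "scene_t2"):
        yield name

gen = make_scenes()
first_pass = list(gen)         # consumes the generator
second_pass = list(gen)        # nothing left to yield

as_list = list(make_scenes())  # what MultiScene(list(scenes)) does
print(first_pass)   # ['scene_t0', 'scene_t1', 'scene_t2']
print(second_pass)  # []
```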

property all_same_area

Determine if all contained Scenes have the same ‘area’.

blend(blend_function=<function stack>)[source]

Blend the datasets into one scene.

Note

Blending is not currently optimized for generator-based MultiScene.

crop(*args, **kwargs)[source]

Crop the multiscene and return a new cropped multiscene.

property first_scene

First Scene of this MultiScene object.

classmethod from_files(files_to_sort, reader=None, ensure_all_readers=False, scene_kwargs=None, **kwargs)[source]

Create multiple Scene objects from multiple files.

Parameters
  • files_to_sort (Collection[str]) – files to read

  • reader (str or Collection[str]) – reader or readers to use

  • ensure_all_readers (bool) – If True, limit to scenes where all readers have at least one file. If False (default), include all scenes where at least one reader has at least one file.

  • scene_kwargs (Mapping) – additional arguments to pass on to Scene.__init__() for each created scene.

This uses the satpy.readers.group_files() function to group files. See this function for more details on additional possible keyword arguments. In particular, it is strongly recommended to pass “group_keys” when using multiple instruments.

New in version 0.12.
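The grouping idea behind satpy.readers.group_files() can be sketched in plain Python: files sharing a time key end up in the same group, and each group becomes one Scene. The filename pattern and the key-extraction lambda below are hypothetical, not satpy's actual grouping logic.

```python
# Simplified sketch: group filenames by a shared time key so that each
# group of files can back one Scene in the resulting MultiScene.
from collections import defaultdict

def group_by_time_key(filenames, key=lambda f: f.split("_")[-1]):
    """Group filenames by a key function (here: trailing timestamp)."""
    groups = defaultdict(list)
    for f in sorted(filenames):
        groups[key(f)].append(f)
    return [groups[k] for k in sorted(groups)]

files = ["abi_C13_201806221200", "abi_C14_201806221200",
         "abi_C13_201806221230", "abi_C14_201806221230"]
print(group_by_time_key(files))
# [['abi_C13_201806221200', 'abi_C14_201806221200'],
#  ['abi_C13_201806221230', 'abi_C14_201806221230']]
```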

group(groups)[source]

Group datasets from the multiple scenes.

By default, MultiScene only operates on dataset IDs shared by all scenes. Using this method you can specify groups of datasets that shall be treated equally by MultiScene, even if their dataset IDs differ (for example, because the names or wavelengths are slightly different). Groups are specified as a dictionary {group_id: dataset_names} where the keys must be of type DataQuery, for example:

groups={
    DataQuery('my_group', wavelength=(10, 11, 12)): ['IR_108', 'B13', 'C13']
}
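
The effect of such a group can be sketched with a plain dictionary: dataset names that differ per sensor ('IR_108' on one, 'B13'/'C13' on others) resolve to a single shared group ID. This is a stdlib-only illustration; DataQuery is simplified to a plain string here, and `resolve_group` is a hypothetical helper, not part of satpy's API.

```python
# Datasets with differing native names resolve to one shared group ID,
# which is how MultiScene can treat them as the same product.
groups = {"my_group": ["IR_108", "B13", "C13"]}

def resolve_group(dataset_name, groups):
    """Return the group ID a dataset name belongs to, or the name itself."""
    for group_id, members in groups.items():
        if dataset_name in members:
            return group_id
    return dataset_name

print(resolve_group("B13", groups))     # my_group
print(resolve_group("VIS006", groups))  # VIS006 (ungrouped, passes through)
```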
property is_generator

Contained Scenes are stored as a generator.

load(*args, **kwargs)[source]

Load the required datasets from the multiple scenes.

property loaded_dataset_ids

Union of all Dataset IDs loaded by all children.

resample(destination=None, **kwargs)[source]

Resample the multiscene.

save_animation(filename, datasets=None, fps=10, fill_value=None, batch_size=1, ignore_missing=False, client=True, enh_args=None, **kwargs)[source]

Save a series of Scenes to a movie (MP4) or GIF file.

Supported formats are dependent on the imageio library and are determined by filename extension by default.

Note

Starting with imageio 2.5.0, the use of FFMPEG depends on a separate imageio-ffmpeg package.

By default all available datasets will be saved to individual files, using the first Scene’s dataset metadata to format the provided filename. If a dataset is not available from a Scene then a black array is used instead (np.zeros(shape)).

This function can use the dask.distributed library for improved performance by computing multiple frames at a time (see batch_size option below). If the distributed library is not available then frames will be generated one at a time, one product at a time.

Parameters
  • filename (str) – Filename to save to. Can include python string formatting keys from dataset .attrs (ex. “{name}_{start_time:%Y%m%d_%H%M%S}.gif”)

  • datasets (list) – DataIDs to save (default: all datasets)

  • fps (int) – Frames per second for produced animation

  • fill_value (int) – Value to use instead of creating an alpha band.

  • batch_size (int) – Number of frames to compute at the same time. This only has an effect if the dask.distributed package is installed, and defaults to 1. Setting this to 0 or less will attempt to process all frames at once. Use this option with care to avoid memory issues when trying to improve performance. Note that this is the total number of frames for all datasets, so when saving 2 datasets this will compute (batch_size / 2) frames for the first dataset and (batch_size / 2) frames for the second dataset.

  • ignore_missing (bool) – Don’t include a black frame when a dataset is missing from a child scene.

  • client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.

  • enh_args (Mapping) – Optional arguments passed to satpy.writers.get_enhanced_image(). If this includes a keyword “decorate”, string formatting based on dataset attributes will be applied to any text added to the image. For example, passing enh_args={"decorate": {"decorate": [{"text": {"txt": "{start_time:%H:%M}"}}]}} will fill in the decorated text accordingly.

  • kwargs – Additional keyword arguments to pass to imageio.get_writer.
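
The batch_size semantics described above can be sketched in plain Python: frames are processed in chunks of batch_size, and a value of 0 or less means all frames at once. This is a stdlib-only illustration of the chunking logic; real frames are dask arrays computed per batch, and `batches` is a hypothetical helper, not satpy's internal function.

```python
# Chunk an iterable of frames into batches; batch_size <= 0 means
# "attempt all frames at once", matching the documented behavior.
def batches(frames, batch_size):
    frames = list(frames)
    if batch_size <= 0:
        yield frames
        return
    for i in range(0, len(frames), batch_size):
        yield frames[i:i + batch_size]

print(list(batches(range(5), 2)))  # [[0, 1], [2, 3], [4]]
print(list(batches(range(5), 0)))  # [[0, 1, 2, 3, 4]]
```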

save_datasets(client=True, batch_size=1, **kwargs)[source]

Run save_datasets on each Scene.

Note that some writers may not be multi-process friendly and may produce unexpected results or fail by raising an exception. In these cases client should be set to False. This is currently a known issue for basic ‘geotiff’ writer workloads.

Parameters
  • batch_size (int) – Number of scenes to compute at the same time. This only has an effect if the dask.distributed package is installed, and defaults to 1. Setting this to 0 or less will attempt to process all scenes at once. Use this option with care to avoid memory issues when trying to improve performance.

  • client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.

  • kwargs – Additional keyword arguments to pass to save_datasets(). Note compute can not be provided.

property scenes

Get list of Scene objects contained in this MultiScene.

Note

If the Scenes contained in this object are stored in a generator (not list or tuple) then accessing this property will load/iterate through the generator, possibly exhausting it.

property shared_dataset_ids

Dataset IDs shared by all children.
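
The difference between loaded_dataset_ids and shared_dataset_ids is set union versus set intersection over the child Scenes. A stdlib sketch, with dataset IDs simplified to plain strings:

```python
# loaded_dataset_ids is the union of every child Scene's dataset IDs;
# shared_dataset_ids is their intersection.
scene_ids = [{"IR_108", "VIS006"}, {"IR_108", "HRV"}, {"IR_108", "VIS006"}]

loaded = set().union(*scene_ids)       # union: everything any child loaded
shared = set.intersection(*scene_ids)  # intersection: common to all children
print(sorted(loaded))  # ['HRV', 'IR_108', 'VIS006']
print(sorted(shared))  # ['IR_108']
```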

satpy.multiscene.add_group_aliases(scenes, groups)[source]

Add aliases for the groups that the datasets belong to.

satpy.multiscene.stack(datasets)[source]

Overlay a series of datasets on top of each other.
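
The overlay behavior can be sketched with plain lists: later datasets are stacked on top, overwriting earlier values wherever they are valid. This is a stdlib-only illustration, with None playing the role of a missing/NaN pixel; satpy's stack operates on xarray DataArrays, not lists.

```python
# Stack datasets so that valid pixels from later datasets overwrite
# earlier ones; None marks a missing pixel in this sketch.
def stack(datasets):
    base = list(datasets[0])
    for dataset in datasets[1:]:
        for i, value in enumerate(dataset):
            if value is not None:  # valid pixel from the later dataset wins
                base[i] = value
    return base

print(stack([[1, 2, 3, 4], [None, 9, None, 9]]))  # [1, 9, 3, 9]
```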

satpy.multiscene.timeseries(datasets)[source]

Expand each dataset with a time dimension and concatenate them along that dimension.
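
The idea can be sketched in plain Python: each dataset is tagged with its time coordinate, then all are ordered along that new time dimension. This is a stdlib-only illustration; satpy does this with xarray, and the (start_time, values) tuples below are hypothetical stand-ins for DataArrays.

```python
# Tag each dataset with a time coordinate and concatenate in time order.
from datetime import datetime

def timeseries(datasets):
    """Order (start_time, values) pairs along the time dimension."""
    return sorted(datasets, key=lambda item: item[0])

frames = [
    (datetime(2018, 6, 22, 12, 30), [5, 6]),
    (datetime(2018, 6, 22, 12, 0), [1, 2]),
]
series = timeseries(frames)
print([t.minute for t, _ in series])  # [0, 30]
```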