satpy.multiscene._multiscene module
MultiScene object to work with multiple timesteps of satellite data.
- class satpy.multiscene._multiscene.MultiScene(scenes=None)[source]
Bases:
object
Container for multiple Scene objects.
Initialize MultiScene and validate sub-scenes.
- Parameters:
scenes (iterable) – Scene objects to operate on (optional)
Note
If the scenes passed to this object are a generator then certain operations performed will try to preserve that generator state. This may limit what properties or methods are available to the user. To avoid this behavior compute the passed generator by converting the passed scenes to a list first:
MultiScene(list(scenes))
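The reason for this caveat is that a Python generator can be iterated only once; converting it to a list up front avoids surprises. A minimal illustration, using strings as stand-ins for Scene objects:

```python
def scenes_gen():
    # Stand-in for a generator yielding Scene objects.
    for t in range(3):
        yield f"scene_{t}"

gen = scenes_gen()
first_pass = list(gen)   # consumes the generator
second_pass = list(gen)  # the generator is now exhausted
```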
- static _call_scene_func(gen, func_name, create_new_scene, *args, **kwargs)[source]
Abstract method for running a Scene method on each Scene.
- _distribute_frame_compute(writers, frame_keys, frames_to_write, client, batch_size=1)[source]
Use dask.distributed to compute multiple frames at a time.
- _distribute_save_datasets(scenes_iter, client, batch_size=1, **kwargs)[source]
Distribute save_datasets across a cluster.
- static _format_decoration(ds, decorate)[source]
Maybe format decoration.
If the nested dictionary in decorate (argument to save_animation) contains text to be added, format it based on dataset parameters.
- _generate_scene_func(gen, func_name, create_new_scene, *args, **kwargs)[source]
Abstract method for running a Scene method on each Scene.
Additionally, modifies current MultiScene or creates a new one if needed.
- _get_animation_frames(all_datasets, shape, fill_value=None, ignore_missing=False, enh_args=None)[source]
Create enhanced image frames to save to a file.
- _get_animation_info(all_datasets, filename, fill_value=None)[source]
Determine filename and shape of animation to be created.
- _get_single_frame(ds, enh_args, fill_value)[source]
Get a single image frame from a dataset.
- _get_writers_and_frames(filename, datasets, fill_value, ignore_missing, enh_args, imio_args)[source]
Get writers and frames.
Helper function for save_animation.
- static _simple_frame_compute(writers, frame_keys, frames_to_write)[source]
Compute frames the plain dask way.
- property all_same_area
Determine if all contained Scenes have the same ‘area’.
- blend(blend_function: Callable[[...], DataArray] | None = None) Scene [source]
Blend the datasets into one scene.
Reduce the MultiScene to a single Scene. Datasets occurring in each scene will be passed to a blending function, which shall take as input a list of datasets (xarray.DataArray objects) and shall return a single dataset (xarray.DataArray object). The blend method then assigns those datasets to the blended scene.
Blending functions provided in this module are stack() (the default), timeseries(), and temporal_rgb(), but the Python built-in function sum() also works and may be appropriate for some types of data.
Note
Blending is not currently optimized for generator-based MultiScene.
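A custom blending function only needs to accept a list of datasets and return a single one. Below is a minimal "first valid value wins" sketch, using plain NumPy arrays as stand-ins for the xarray.DataArray objects satpy actually passes:

```python
import numpy as np

def first_valid_blend(data_arrays):
    # Per pixel, keep the first non-NaN value found across scenes.
    # Real satpy blending functions receive and return xarray.DataArray
    # objects; NumPy keeps this sketch self-contained.
    result = np.full(data_arrays[0].shape, np.nan)
    for arr in data_arrays:
        take = np.isnan(result) & ~np.isnan(arr)
        result[take] = arr[take]
    return result

a = np.array([[1.0, np.nan], [np.nan, np.nan]])
b = np.array([[9.0, 2.0], [3.0, np.nan]])
blended = first_valid_blend([a, b])
```

With real scenes such a function would be passed as `mscn.blend(first_valid_blend)`, where `mscn` is a hypothetical MultiScene instance.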
- property first_scene
First Scene of this MultiScene object.
- classmethod from_files(files_to_sort: Collection[str], reader: str | Collection[str] | None = None, ensure_all_readers: bool = False, scene_kwargs: Mapping | None = None, **kwargs)[source]
Create multiple Scene objects from multiple files.
- Parameters:
files_to_sort – files to read
reader – reader or readers to use
ensure_all_readers – If True, limit to scenes where all readers have at least one file. If False (default), include all scenes where at least one reader has at least one file.
scene_kwargs – additional arguments to pass on to Scene.__init__() for each created scene.
This uses the satpy.readers.group_files() function to group files. See this function for more details on additional possible keyword arguments. In particular, it is strongly recommended to pass “group_keys” when using multiple instruments.
Added in version 0.12.
- group(groups)[source]
Group datasets from the multiple scenes.
By default, MultiScene only operates on dataset IDs shared by all scenes. Using this method you can specify groups of datasets that shall be treated equally by MultiScene, even if their dataset IDs differ (for example because the names or wavelengths are slightly different). Groups can be specified as a dictionary {group_id: dataset_names} where the keys must be of type DataQuery, for example:
groups={DataQuery('my_group', wavelength=(10, 11, 12)): ['IR_108', 'B13', 'C13']}
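The aliasing idea can be sketched without satpy: each group lists the per-sensor dataset names it covers, and a lookup resolves any member name to its group. Plain strings stand in for DataQuery keys here, and the band names are hypothetical:

```python
# Toy stand-in for DataQuery-keyed groups.
groups = {
    "ir_window": ["IR_108", "B13", "C13"],  # hypothetical ~10.8 um bands
}

def resolve_group(dataset_name, groups):
    # Return the group a dataset name belongs to, or None.
    for group_id, members in groups.items():
        if dataset_name in members:
            return group_id
    return None
```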
- property is_generator
Contained Scenes are stored as a generator.
- property loaded_dataset_ids
Union of all Dataset IDs loaded by all children.
- save_animation(filename, datasets=None, fps=10, fill_value=None, batch_size=1, ignore_missing=False, client=True, enh_args=None, **kwargs)[source]
Save series of Scenes to movie (MP4) or GIF formats.
Supported formats are dependent on the imageio library and are determined by filename extension by default.
Note
Starting with imageio 2.5.0, the use of FFMPEG depends on a separate imageio-ffmpeg package.
By default all datasets available will be saved to individual files using the first Scene’s datasets metadata to format the filename provided. If a dataset is not available from a Scene then a black array is used instead (np.zeros(shape)).
This function can use the dask.distributed library for improved performance by computing multiple frames at a time (see batch_size option below). If the distributed library is not available then frames will be generated one at a time, one product at a time.
- Parameters:
filename (str) – Filename to save to. Can include python string formatting keys from dataset .attrs (ex. “{name}_{start_time:%Y%m%d_%H%M%S}.gif”)
datasets (list) – DataIDs to save (default: all datasets)
fps (int) – Frames per second for produced animation
fill_value (int) – Value to use instead of creating an alpha band.
batch_size (int) – Number of frames to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all frames at once. This option should be used with care to avoid memory issues when trying to improve performance. Note that this is the total number of frames for all datasets, so when saving 2 datasets this will compute (batch_size / 2) frames for the first dataset and (batch_size / 2) frames for the second dataset.
ignore_missing (bool) – Don’t include a black frame when a dataset is missing from a child scene.
client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.
enh_args (Mapping) – Optional, arguments passed to satpy.writers.get_enhanced_image(). If this includes a keyword “decorate”, string formatting will be applied to any text added to the image, based on dataset attributes. For example, passing enh_args={"decorate": {"decorate": [{"text": {"txt": "{start_time:%H:%M}"}}]}} will replace the decorated text accordingly.
kwargs – Additional keyword arguments to pass to imageio.get_writer.
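The batch_size semantics described above (a value of 0 or less means all frames at once) can be sketched as a simple batching iterator. This illustrates the documented behavior only; it is not satpy's internal implementation:

```python
def iter_frame_batches(frame_keys, batch_size=1):
    # Yield frame keys in batches; batch_size <= 0 means all at once.
    keys = list(frame_keys)
    if batch_size <= 0:
        yield keys
        return
    for i in range(0, len(keys), batch_size):
        yield keys[i:i + batch_size]
```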
- save_datasets(client=True, batch_size=1, **kwargs)[source]
Run save_datasets on each Scene.
Note that some writers may not be multi-process friendly and may produce unexpected results or fail by raising an exception. In these cases client should be set to False. This is currently a known issue for basic ‘geotiff’ writer work loads.
- Parameters:
batch_size (int) – Number of scenes to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all scenes at once. This option should be used with care to avoid memory issues when trying to improve performance.
client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.
kwargs – Additional keyword arguments to pass to save_datasets(). Note compute can not be provided.
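The documented handling of the client argument (shared by save_datasets and save_animation) can be sketched as follows. This mimics the described semantics and is not satpy's actual code; dask.distributed.get_client() raises ValueError when no client is running:

```python
def resolve_client(client=True):
    # False/None -> no distributed computation.
    if client is None or client is False:
        return None
    # True -> reuse an existing dask.distributed client, if any.
    if client is True:
        try:
            from dask.distributed import get_client
            return get_client()
        except (ImportError, ValueError):
            return None  # distributed not installed, or no client running
    # Otherwise assume a dask.distributed.Client instance was passed.
    return client
```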
- property scenes
Get list of Scene objects contained in this MultiScene.
Note
If the Scenes contained in this object are stored in a generator (not list or tuple) then accessing this property will load/iterate through the generator, possibly consuming it.
- property shared_dataset_ids
Dataset IDs shared by all children.
- class satpy.multiscene._multiscene._GroupAliasGenerator(scene, groups)[source]
Bases:
object
Add group aliases to a scene.
Initialize the alias generator.
- class satpy.multiscene._multiscene._SceneGenerator(scene_gen)[source]
Bases:
object
Fancy way of caching Scenes from a generator.
- property first
First element in the generator.
- satpy.multiscene._multiscene._group_datasets_in_scenes(scenes, groups)[source]
Group different datasets in multiple scenes by adding aliases.
- Parameters:
scenes (iterable) – Scenes to be processed.
groups (dict) –
Groups of datasets that shall be treated equally by MultiScene. Keys specify the groups, values specify the dataset names to be grouped. For example:
from satpy import DataQuery
groups = {DataQuery(name='odd'): ['ds1', 'ds3'],
          DataQuery(name='even'): ['ds2', 'ds4']}