satpy package

Subpackages

Submodules

satpy.config module

Satpy Configuration directory and file handling.

satpy.config.check_satpy(readers=None, writers=None, extras=None)[source]

Check the satpy readers and writers for correct installation.

Parameters
  • readers (list or None) – Limit readers checked to those specified

  • writers (list or None) – Limit writers checked to those specified

  • extras (list or None) – Limit extras checked to those specified

Returns: bool

True if all specified features were successfully loaded.
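
For example, to check whether one specific reader and writer can be loaded (a small usage sketch; the reader and writer names are only illustrative):

>>> from satpy.config import check_satpy
>>> check_satpy(readers=['viirs_sdr'], writers=['geotiff'])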

satpy.config.check_yaml_configs(configs, key)[source]

Get a diagnostic for the yaml configs.

key is the section to look for to get a name for the config at hand.

satpy.config.config_search_paths(filename, *search_dirs, **kwargs)[source]

Get the environment variable value every time (could be set dynamically).

satpy.config.get_config(filename, *search_dirs, **kwargs)[source]

Blend the different configs, from the package defaults up to the current directory (.).

satpy.config.get_config_path(filename, *search_dirs)[source]

Get the appropriate path for a filename, checking in this order: the filename itself, the current directory (.), PPP_CONFIG_DIR, and the package’s etc directory.

satpy.config.get_entry_points_config_dirs(name)[source]

Get the config directories for all entry points of given name.

satpy.config.get_environ_ancpath(default='.')[source]

Get the ancpath.

satpy.config.get_environ_config_dir(default=None)[source]

Get the config dir.

satpy.config.glob_config(pattern, *search_dirs)[source]

Return glob results for all possible configuration locations.

Note: This method does not check the configuration “base” directory if the pattern includes a subdirectory.

This is done for performance since this is usually used to find all configs for a certain component.
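
For instance, to collect every reader configuration file visible to Satpy (a hedged sketch that assumes the usual readers/*.yaml layout for component configs):

>>> from satpy.config import glob_config
>>> reader_configs = list(glob_config('readers/*.yaml'))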

satpy.config.recursive_dict_update(d, u)[source]

Recursive dictionary update.

Copied from:
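
Unlike dict.update(), nested dictionaries are merged rather than replaced. A minimal sketch (the dictionary contents are only illustrative):

>>> from satpy.config import recursive_dict_update
>>> defaults = {'reader': {'name': 'abi_l1b', 'options': {'a': 1}}}
>>> overrides = {'reader': {'options': {'b': 2}}}
>>> merged = recursive_dict_update(defaults, overrides)
>>> # merged['reader'] is now {'name': 'abi_l1b', 'options': {'a': 1, 'b': 2}}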

satpy.config.runtime_import(object_path)[source]

Import at runtime.

satpy.dependency_tree module

Implementation of a dependency tree.

class satpy.dependency_tree.DependencyTree(readers, compositors, modifiers, available_only=False)[source]

Bases: satpy.dependency_tree.Tree

Structure to discover and store Dataset dependencies.

Used primarily by the Scene object to organize dependency finding. Dependencies are stored using a series of Node objects, of which this class is a subclass.

Collect Dataset generating information.

Collect the objects that generate and have information about Datasets including objects that may depend on certain Datasets being generated. This includes readers, compositors, and modifiers.

Composites and modifiers are defined per-sensor. If multiple sensors are available, compositors and modifiers are searched for in sensor alphabetical order.

Parameters
  • readers (dict) – Reader name -> Reader Object

  • compositors (dict) – Sensor name -> Composite ID -> Composite Object

  • modifiers (dict) – Sensor name -> Modifier name -> (Modifier Class, modifier options)

  • available_only (bool) – Whether only reader’s available/loadable datasets should be used when searching for dependencies (True) or use all known/configured datasets regardless of whether the necessary files were provided to the reader (False). Note that when False loadable variations of a dataset will have priority over other known variations. Default is False.

copy()[source]

Copy this node tree.

Note all references to readers are removed. This is meant to avoid tree copies accessing readers that would return incompatible (Area) data. Theoretically it should be possible for tree copies to request compositor or modifier information as long as they don’t depend on any datasets not already existing in the dependency tree.

get_compositor(key)[source]

Get a compositor.

get_modifier(comp_id)[source]

Get a modifier.

populate_with_keys(dataset_keys: set, query=None)[source]

Populate the dependency tree.

Parameters
  • dataset_keys (set) – Strings, DataIDs, or DataQuery objects to find dependencies for

  • query (DataQuery) – Additional filter parameters. See satpy.readers.get_key for more details.

Returns

Root node of the dependency tree and a set of unknown datasets

Return type

(Node, set)

class satpy.dependency_tree.Tree[source]

Bases: object

A tree implementation.

Set up the tree.

add_child(parent, child)[source]

Add a child to the tree.

add_leaf(ds_id, parent=None)[source]

Add a leaf to the tree.

contains(item)[source]

Check whether the tree contains item when we know the exact DataID or DataQuery.

empty_node = <Node ('__EMPTY_LEAF_SENTINEL__')>

getitem(item)[source]

Get Node when we know the exact DataID or DataQuery.

leaves(nodes=None, unique=True)[source]

Get the leaves of the tree starting at the root.

Parameters
  • nodes (iterable) – limit leaves to these node names

  • unique – only include individual leaf nodes once

Returns

list of leaf nodes

trunk(nodes=None, unique=True)[source]

Get the trunk nodes of the tree starting at this root.

Parameters
  • nodes (iterable) – limit trunk nodes to the names specified or the children of them that are also trunk nodes.

  • unique – only include individual trunk nodes once

Returns

list of trunk nodes

satpy.multiscene module

MultiScene object to work with multiple timesteps of satellite data.

class satpy.multiscene.MultiScene(scenes=None)[source]

Bases: object

Container for multiple Scene objects.

Initialize MultiScene and validate sub-scenes.

Parameters

scenes (iterable) – Scene objects to operate on (optional)

Note

If the scenes passed to this object are a generator then certain operations performed will try to preserve that generator state. This may limit what properties or methods are available to the user. To avoid this behavior compute the passed generator by converting the passed scenes to a list first: MultiScene(list(scenes)).

property all_same_area

Determine if all contained Scenes have the same ‘area’.

blend(blend_function=<function stack>)[source]

Blend the datasets into one scene.

Note

Blending is not currently optimized for generator-based MultiScene.

crop(*args, **kwargs)[source]

Crop the multiscene and return a new cropped multiscene.

property first_scene

First Scene of this MultiScene object.

classmethod from_files(files_to_sort, reader=None, ensure_all_readers=False, scene_kwargs=None, **kwargs)[source]

Create multiple Scene objects from multiple files.

Parameters
  • files_to_sort (Collection[str]) – files to read

  • reader (str or Collection[str]) – reader or readers to use

  • ensure_all_readers (bool) – If True, limit to scenes where all readers have at least one file. If False (default), include all scenes where at least one reader has at least one file.

  • scene_kwargs (Mapping) – additional arguments to pass on to Scene.__init__() for each created scene.

This uses the satpy.readers.group_files() function to group files. See this function for more details on additional possible keyword arguments. In particular, it is strongly recommended to pass “group_keys” when using multiple instruments.

New in version 0.12.
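
A hedged usage sketch (the file pattern and reader name are only illustrative):

>>> from glob import glob
>>> from satpy import MultiScene
>>> mscn = MultiScene.from_files(glob('/data/abi/*.nc'), reader='abi_l1b')
>>> mscn.load(['C13'])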

group(groups)[source]

Group datasets from the multiple scenes.

By default, MultiScene only operates on dataset IDs shared by all scenes. Using this method you can specify groups of datasets that shall be treated equally by MultiScene, even if their dataset IDs differ (for example because the names or wavelengths are slightly different). Groups can be specified as a dictionary {group_id: dataset_names} where the keys must be of type DataQuery, for example:

groups={
    DataQuery('my_group', wavelength=(10, 11, 12)): ['IR_108', 'B13', 'C13']
}

property is_generator

Contained Scenes are stored as a generator.

load(*args, **kwargs)[source]

Load the required datasets from the multiple scenes.

property loaded_dataset_ids

Union of all Dataset IDs loaded by all children.

resample(destination=None, **kwargs)[source]

Resample the multiscene.

save_animation(filename, datasets=None, fps=10, fill_value=None, batch_size=1, ignore_missing=False, client=True, enh_args=None, **kwargs)[source]

Save series of Scenes to movie (MP4) or GIF formats.

Supported formats are dependent on the imageio library and are determined by filename extension by default.

Note

Starting with imageio 2.5.0, the use of FFMPEG depends on a separate imageio-ffmpeg package.

By default all datasets available will be saved to individual files using the first Scene’s datasets metadata to format the filename provided. If a dataset is not available from a Scene then a black array is used instead (np.zeros(shape)).

This function can use the dask.distributed library for improved performance by computing multiple frames at a time (see batch_size option below). If the distributed library is not available then frames will be generated one at a time, one product at a time.

Parameters
  • filename (str) – Filename to save to. Can include python string formatting keys from dataset .attrs (ex. “{name}_{start_time:%Y%m%d_%H%M%S}.gif”)

  • datasets (list) – DataIDs to save (default: all datasets)

  • fps (int) – Frames per second for produced animation

  • fill_value (int) – Value to use instead of creating an alpha band.

  • batch_size (int) – Number of frames to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all frames at once. This option should be used with care to avoid memory issues when trying to improve performance. Note that this is the total number of frames for all datasets, so when saving 2 datasets this will compute (batch_size / 2) frames for the first dataset and (batch_size / 2) frames for the second dataset.

  • ignore_missing (bool) – Don’t include a black frame when a dataset is missing from a child scene.

  • client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.

  • enh_args (Mapping) – Optional arguments passed to satpy.writers.get_enhanced_image(). If this includes the keyword “decorate”, string formatting based on dataset attributes will be applied to any text added to the image. For example, passing enh_args={"decorate": {"decorate": [{"text": {"txt": "{start_time:%H:%M}"}}]}} will fill in the decorated text accordingly.

  • kwargs – Additional keyword arguments to pass to imageio.get_writer.
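
A hedged usage sketch, assuming mscn is a MultiScene whose datasets have already been loaded and resampled to a common area:

>>> from dask.distributed import Client
>>> client = Client()  # optional: lets batch_size frames be computed in parallel
>>> mscn.save_animation('{name}_{start_time:%Y%m%d_%H%M%S}.mp4', fps=2, batch_size=4)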

save_datasets(client=True, batch_size=1, **kwargs)[source]

Run save_datasets on each Scene.

Note that some writers may not be multi-process friendly and may produce unexpected results or fail by raising an exception. In these cases client should be set to False. This is currently a known issue for basic ‘geotiff’ writer work loads.

Parameters
  • batch_size (int) – Number of scenes to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all scenes at once. This option should be used with care to avoid memory issues when trying to improve performance.

  • client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.

  • kwargs – Additional keyword arguments to pass to save_datasets(). Note compute can not be provided.

property scenes

Get list of Scene objects contained in this MultiScene.

Note

If the Scenes contained in this object are stored in a generator (not list or tuple) then accessing this property will load/iterate through the generator, possibly exhausting it.

property shared_dataset_ids

Dataset IDs shared by all children.

satpy.multiscene.add_group_aliases(scenes, groups)[source]

Add aliases for the groups datasets belong to.

satpy.multiscene.stack(datasets)[source]

Overlay series of datasets on top of each other.

satpy.multiscene.timeseries(datasets)[source]

Expand each dataset with a time dimension and concatenate them along it.

satpy.node module

Nodes to build trees.

class satpy.node.CompositorNode(compositor)[source]

Bases: satpy.node.Node

Implementation of a compositor-specific node.

Set up the node.

add_optional_nodes(children)[source]

Add nodes to the optional field.

add_required_nodes(children)[source]

Add nodes to the required field.

property optional_nodes

Get the optional nodes.

property required_nodes

Get the required nodes.

exception satpy.node.MissingDependencies(missing_dependencies, *args, **kwargs)[source]

Bases: RuntimeError

Exception when dependencies are missing.

Set up the exception.

class satpy.node.Node(name, data=None)[source]

Bases: object

A node object.

Init the node object.

add_child(obj)[source]

Add a child to the node.

copy(node_cache=None)[source]

Make a copy of the node.

display(previous=0, include_data=False)[source]

Display the node.

flatten(d=None)[source]

Flatten tree structure to a one level dictionary.

Parameters

d (dict, optional) – output dictionary to update

Returns

Node.name -> Node. The returned dictionary includes the

current Node and all its children.

Return type

dict

property is_leaf

Check if the node is a leaf.

leaves(unique=True)[source]

Get the leaves of the tree starting at this root.

trunk(unique=True)[source]

Get the trunk of the tree starting at this root.

satpy.plugin_base module

The satpy.plugin_base module defines the plugin API.

class satpy.plugin_base.Plugin(ppp_config_dir=None, default_config_filename=None, config_files=None, **kwargs)[source]

Bases: object

Base plugin class for all dynamically loaded and configured objects.

Load configuration files related to this plugin.

This initializes a self.config dictionary that can be used to customize the subclass.

Parameters
  • ppp_config_dir (str) – Base “etc” directory for all configuration files.

  • default_config_filename (str) – Configuration filename to use if no other files have been specified with config_files.

  • config_files (list or str) – Configuration files to load instead of those automatically found in ppp_config_dir and other default configuration locations.

  • kwargs (dict) – Unused keyword arguments.

load_yaml_config(conf)[source]

Load a YAML configuration file and recursively update the overall configuration.

satpy.resample module

Satpy resampling module.

Satpy provides multiple resampling algorithms for resampling geolocated data to uniform projected grids. The easiest way to perform resampling in Satpy is through the Scene object’s resample() method. Additional utility functions are also available to assist in resampling data. Below is more information on resampling with Satpy as well as links to the relevant API documentation for available keyword arguments.

Resampling algorithms

Available Resampling Algorithms

Resampler         Description                     Related
nearest           Nearest Neighbor                KDTreeResampler
ewa               Elliptical Weighted Averaging   EWAResampler
native            Native                          NativeResampler
bilinear          Bilinear                        BilinearResampler
bucket_avg        Average Bucket Resampling       BucketAvg
bucket_sum        Sum Bucket Resampling           BucketSum
bucket_count      Count Bucket Resampling         BucketCount
bucket_fraction   Fraction Bucket Resampling      BucketFraction
gradient_search   Gradient Search Resampling      GradientSearchResampler

The resampling algorithm used can be specified with the resampler keyword argument and defaults to nearest:

>>> scn = Scene(...)
>>> euro_scn = scn.resample('euro4', resampler='nearest')

Warning

Some resampling algorithms expect certain forms of data. For example, the EWA resampling expects polar-orbiting swath data and prefers if the data can be broken into “scan lines”. See the API documentation for a specific algorithm for more information.

Resampling for comparison and composites

While all the resamplers can be used to put datasets of different resolutions on to a common area, the ‘native’ resampler is designed to match datasets to one resolution in the dataset’s original projection. This is extremely useful when generating composites between bands of different resolutions.

>>> new_scn = scn.resample(resampler='native')

By default this resamples to the highest resolution area (smallest footprint per pixel) shared between the loaded datasets. You can easily specify the lower resolution area:

>>> new_scn = scn.resample(scn.min_area(), resampler='native')

Providing an area that is neither the minimum nor the maximum resolution area may work, but behavior is currently undefined.

Caching for geostationary data

Satpy will do its best to reuse calculations performed to resample datasets, but it can only do this for the current processing and will lose this information when the process/script ends. Some resampling algorithms, like nearest and bilinear, can benefit by caching intermediate data on disk in the directory specified by cache_dir and using it next time. This is most beneficial with geostationary satellite data where the locations of the source data and the target pixels don’t change over time.

>>> new_scn = scn.resample('euro4', cache_dir='/path/to/cache_dir')

See the documentation for specific algorithms to see availability and limitations of caching for that algorithm.

Create custom area definition

See pyresample.geometry.AreaDefinition for information on creating areas that can be passed to the resample method:

>>> from pyresample.geometry import AreaDefinition
>>> my_area = AreaDefinition(...)
>>> local_scene = scn.resample(my_area)

Create dynamic area definition

See pyresample.geometry.DynamicAreaDefinition for more information.

Examples coming soon…

Store area definitions

Area definitions can be added to a custom YAML file (see pyresample’s documentation for more information) and loaded using pyresample’s utility methods:

>>> from pyresample.utils import parse_area_file
>>> my_area = parse_area_file('my_areas.yaml', 'my_area')[0]

Examples coming soon…

class satpy.resample.BaseResampler(source_geo_def, target_geo_def)[source]

Bases: object

Base abstract resampler class.

Initialize resampler with geolocation information.

Parameters
  • source_geo_def (SwathDefinition, AreaDefinition) – Geolocation definition for the data to be resampled

  • target_geo_def (CoordinateDefinition, AreaDefinition) – Geolocation definition for the area to resample data to.

compute(data, **kwargs)[source]

Do the actual resampling.

This must be implemented by subclasses.

get_hash(source_geo_def=None, target_geo_def=None, **kwargs)[source]

Get hash for the current resample with the given kwargs.

precompute(**kwargs)[source]

Do the precomputation.

This is an optional step if the subclass wants to implement more complex features like caching or can share some calculations between multiple datasets to be processed.

resample(data, cache_dir=None, mask_area=None, **kwargs)[source]

Resample data by calling precompute and compute methods.

Only certain resampling classes may use cache_dir and the mask provided when mask_area is True. The return value of calling the precompute method is passed as the cache_id keyword argument of the compute method, but may not be used directly for caching. It is up to the individual resampler subclasses to determine how this is used.

Parameters
  • data (xarray.DataArray) – Data to be resampled

  • cache_dir (str) – directory to cache precomputed results (default None, optional)

  • mask_area (bool) – Mask geolocation data where data values are invalid. This should be used when data values may affect what neighbors are considered valid.

Returns (xarray.DataArray): Data resampled to the target area
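
To illustrate the precompute/compute flow described above, here is a minimal hypothetical subclass; this class is not part of Satpy, and a real resampler would remap the data from the source geometry to the target geometry instead of returning it unchanged:

from satpy.resample import BaseResampler

class IdentityResampler(BaseResampler):
    """Toy resampler sketch: pass the data through untouched."""

    def precompute(self, **kwargs):
        # Whatever is returned here is handed to compute() as cache_id.
        return None

    def compute(self, data, cache_id=None, **kwargs):
        # A real implementation would use the source/target geolocation
        # definitions given to __init__ to remap the data here.
        return data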

class satpy.resample.BilinearResampler(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BaseResampler

Resample using bilinear interpolation.

This resampler implements on-disk caching when the cache_dir argument is provided to the resample method. This should provide significant performance improvements on consecutive resampling of geostationary data.

Parameters
  • cache_dir (str) – Long term storage directory for intermediate results.

  • radius_of_influence (float) – Search radius cut off distance in meters

  • epsilon (float) – Allowed uncertainty in meters. Increasing uncertainty reduces execution time.

  • reduce_data (bool) – Reduce the input data to (roughly) match the target area.

Init BilinearResampler.

compute(data, fill_value=None, **kwargs)[source]

Resample the given data using bilinear interpolation.

load_bil_info(cache_dir, **kwargs)[source]

Load bilinear resampling info from cache directory.

precompute(mask=None, radius_of_influence=50000, epsilon=0, reduce_data=True, cache_dir=False, **kwargs)[source]

Create bilinear coefficients and store them for later use.

save_bil_info(cache_dir, **kwargs)[source]

Save bilinear resampling info to cache directory.

class satpy.resample.BucketAvg(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BucketResamplerBase

Class for averaging bucket resampling.

Bucket resampling calculates the average of all the values that are closest to each bin and inside the target area.

Parameters
  • fill_value (float (default: np.nan)) – Fill value for missing data

  • mask_all_nans (boolean (default: False)) – Mask all locations with all-NaN values

Initialize bucket resampler.

compute(data, fill_value=nan, mask_all_nan=False, **kwargs)[source]

Call the resampling.

class satpy.resample.BucketCount(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BucketResamplerBase

Class for bucket resampling which implements hit-counting.

This resampler calculates the number of occurrences of the input data closest to each bin and inside the target area.

Initialize bucket resampler.

compute(data, **kwargs)[source]

Call the resampling.

class satpy.resample.BucketFraction(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BucketResamplerBase

Class for bucket resampling to compute category fractions.

This resampler calculates the fraction of occurrences of the input data per category.

Initialize bucket resampler.

compute(data, fill_value=nan, categories=None, **kwargs)[source]

Call the resampling.

class satpy.resample.BucketResamplerBase(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BaseResampler

Base class for bucket resampling which implements averaging.

Initialize bucket resampler.

compute(data, **kwargs)[source]

Call the resampling.

precompute(**kwargs)[source]

Create X and Y indices and store them for later use.

resample(data, **kwargs)[source]

Resample data by calling precompute and compute methods.

Parameters

data (xarray.DataArray) – Data to be resampled

Returns (xarray.DataArray): Data resampled to the target area

class satpy.resample.BucketSum(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BucketResamplerBase

Class for bucket resampling which implements accumulation (sum).

This resampler calculates the cumulative sum of all the values that are closest to each bin and inside the target area.

Parameters
  • fill_value (float (default: np.nan)) – Fill value for missing data

  • mask_all_nans (boolean (default: False)) – Mask all locations with all-NaN values

Initialize bucket resampler.

compute(data, mask_all_nan=False, **kwargs)[source]

Call the resampling.

class satpy.resample.EWAResampler(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BaseResampler

Resample using an elliptical weighted averaging algorithm.

This algorithm does not use caching or any externally provided data mask (unlike the ‘nearest’ resampler).

This algorithm works under the assumption that the data is observed one scan line at a time. However, good results can still be achieved for non-scan based data provided rows_per_scan is set to the number of rows in the entire swath or by setting it to None.

Parameters
  • rows_per_scan (int, None) – Number of data rows for every observed scanline. If None then the entire swath is treated as one large scanline.

  • weight_count (int) – number of elements to create in the gaussian weight table. Default is 10000. Must be at least 2

  • weight_min (float) – the minimum value to store in the last position of the weight table. Default is 0.01, which, with a weight_distance_max of 1.0 produces a weight of 0.01 at a grid cell distance of 1.0. Must be greater than 0.

  • weight_distance_max (float) – distance in grid cell units at which to apply a weight of weight_min. Default is 1.0. Must be greater than 0.

  • weight_delta_max (float) – maximum distance in grid cells in each grid dimension over which to distribute a single swath cell. Default is 10.0.

  • weight_sum_min (float) – minimum weight sum value. Cells whose weight sums are less than weight_sum_min are set to the grid fill value. Default is EPSILON.

  • maximum_weight_mode (bool) – If False (default), a weighted average of all swath cells that map to a particular grid cell is used. If True, the swath cell having the maximum weight of all swath cells that map to a particular grid cell is used. This option should be used for coded/category data, i.e. snow cover.

Init EWAResampler.

compute(data, cache_id=None, fill_value=0, weight_count=10000, weight_min=0.01, weight_distance_max=1.0, weight_delta_max=1.0, weight_sum_min=-1.0, maximum_weight_mode=False, grid_coverage=0, **kwargs)[source]

Resample the data according to the precomputed X/Y coordinates.

precompute(cache_dir=None, swath_usage=0, **kwargs)[source]

Generate row and column arrays and store them for later use.

resample(*args, **kwargs)[source]

Run precompute and compute methods.

Note

This sets the default of ‘mask_area’ to False since it is not needed in EWA resampling currently.

class satpy.resample.KDTreeResampler(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BaseResampler

Resample using a KDTree-based nearest neighbor algorithm.

This resampler implements on-disk caching when the cache_dir argument is provided to the resample method. This should provide significant performance improvements on consecutive resampling of geostationary data. It is not recommended to provide cache_dir when the mask keyword argument is provided to precompute which occurs by default for SwathDefinition source areas.

Parameters
  • cache_dir (str) – Long term storage directory for intermediate results.

  • mask (bool) – Force resampled data’s invalid pixel mask to be used when searching for nearest neighbor pixels. By default this is True for SwathDefinition source areas and False for all other area definition types.

  • radius_of_influence (float) – Search radius cut off distance in meters

  • epsilon (float) – Allowed uncertainty in meters. Increasing uncertainty reduces execution time.

Init KDTreeResampler.

compute(data, weight_funcs=None, fill_value=nan, with_uncert=False, **kwargs)[source]

Resample data.

load_neighbour_info(cache_dir, mask=None, **kwargs)[source]

Read index arrays from either the in-memory or disk cache.

precompute(mask=None, radius_of_influence=None, epsilon=0, cache_dir=None, **kwargs)[source]

Create a KDTree structure and store it for later use.

Note: The mask keyword should be provided if geolocation may be valid where data points are invalid.

save_neighbour_info(cache_dir, mask=None, **kwargs)[source]

Cache resampler’s index arrays if there is a cache dir.

class satpy.resample.NativeResampler(source_geo_def, target_geo_def)[source]

Bases: satpy.resample.BaseResampler

Expand or reduce input datasets to be the same shape.

If data is higher resolution (more pixels) than the destination area then data is averaged to match the destination resolution.

If data is lower resolution (fewer pixels) than the destination area then data is repeated to match the destination resolution.

This resampler does not perform any caching or masking due to the simplicity of the operations.

Initialize resampler with geolocation information.

Parameters
  • source_geo_def (SwathDefinition, AreaDefinition) – Geolocation definition for the data to be resampled

  • target_geo_def (CoordinateDefinition, AreaDefinition) – Geolocation definition for the area to resample data to.

static aggregate(d, y_size, x_size)[source]

Average every 4 elements (2x2) in a 2D array.

compute(data, expand=True, **kwargs)[source]

Resample data with NativeResampler.

classmethod expand_reduce(d_arr, repeats)[source]

Expand or reduce an array along each dimension according to the given repeat factors.

resample(data, cache_dir=None, mask_area=False, **kwargs)[source]

Run NativeResampler.

satpy.resample.add_crs_xy_coords(data_arr, area)[source]

Add pyproj.crs.CRS and x/y or lons/lats to coordinates.

For SwathDefinition or GridDefinition areas this will add a crs coordinate and coordinates for the 2D arrays of lons and lats.

For AreaDefinition areas this will add a crs coordinate and the 1-dimensional x and y coordinate variables.

Parameters
  • data_arr (xarray.DataArray) – Data array to add the coordinates to.

  • area – Area definition (SwathDefinition, GridDefinition, or AreaDefinition) describing the data’s geolocation.

satpy.resample.add_xy_coords(data_arr, area, crs=None)[source]

Assign x/y coordinates to DataArray from provided area.

If ‘x’ and ‘y’ coordinates already exist then they will not be added.

Parameters
  • data_arr (xarray.DataArray) – Data array to assign the coordinates to.

  • area – Area definition the data covers.

  • crs (optional) – Coordinate reference system to use for the coordinates.

Returns (xarray.DataArray): Updated DataArray object

satpy.resample.get_area_def(area_name)[source]

Get the definition of area_name from file.

The file to use is defined in satpy’s configuration file and is to be placed in the $PPP_CONFIG_DIR directory.
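
A hedged usage sketch, assuming an area named 'euro4' is defined in the configured areas file:

>>> from satpy.resample import get_area_def
>>> euro4 = get_area_def('euro4')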

satpy.resample.get_area_file()[source]

Find area file(s) to use.

The files are to be named areas.yaml or areas.def.

satpy.resample.get_fill_value(dataset)[source]

Get the fill value of the dataset, defaulting to np.nan.

satpy.resample.hash_dict(the_dict, the_hash=None)[source]

Calculate a hash for a dictionary.

satpy.resample.prepare_resampler(source_area, destination_area, resampler=None, **resample_kwargs)[source]

Instantiate and return a resampler.

satpy.resample.resample(source_area, data, destination_area, resampler=None, **kwargs)[source]

Do the resampling.

satpy.resample.resample_dataset(dataset, destination_area, **kwargs)[source]

Resample dataset and return the resampled version.

Parameters
  • dataset (xarray.DataArray) – Data to be resampled.

  • destination_area – The destination onto which to project the data, either a full blown area definition or a string corresponding to the name of the area as defined in the area file.

  • **kwargs – The extra parameters to pass to the resampler objects.

Returns

A resampled DataArray with updated .attrs["area"] field. The dtype of the array is preserved.
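
A hedged usage sketch, assuming data_arr is a previously loaded xarray.DataArray with geolocation in its attributes and that an area named 'euro4' exists in the areas file:

>>> from satpy.resample import resample_dataset
>>> local_data = resample_dataset(data_arr, 'euro4')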

satpy.resample.update_resampled_coords(old_data, new_data, new_area)[source]

Add coordinate information to newly resampled DataArray.

Parameters
  • old_data (xarray.DataArray) – Data before resampling.

  • new_data (xarray.DataArray) – Data after resampling.

  • new_area – Area definition of the newly resampled data.

satpy.scene module

Scene object to hold satellite data.

exception satpy.scene.DelayedGeneration[source]

Bases: KeyError

Mark that a dataset can’t be generated without further modification.

class satpy.scene.Scene(filenames=None, reader=None, filter_parameters=None, reader_kwargs=None, ppp_config_dir=None, base_dir=None, sensor=None, start_time=None, end_time=None, area=None)[source]

Bases: object

The Almighty Scene Class.

Example usage:

from satpy import Scene
from glob import glob

# create readers and open files
scn = Scene(filenames=glob('/path/to/files/*'), reader='viirs_sdr')

# load datasets from input files
scn.load(['I01', 'I02'])

# resample from satellite native geolocation to builtin 'eurol' Area
new_scn = scn.resample('eurol')

# save all resampled datasets to geotiff files in the current directory
new_scn.save_datasets()

Initialize Scene with Reader and Compositor objects.

To load data, filenames and preferably reader must be specified. If filenames is provided without reader then the available readers will be searched for a Reader that can support the provided files. This can take a considerable amount of time so it is recommended that reader always be provided. Note that without filenames the Scene is created with no Readers available, requiring Datasets to be added manually:

scn = Scene()
scn['my_dataset'] = Dataset(my_data_array, **my_info)

Parameters
  • filenames (iterable or dict) – A sequence of files that will be used to load data from. A dict object should map reader names to a list of filenames for that reader.

  • reader (str or list) – The name of the reader to use for loading the data or a list of names.

  • filter_parameters (dict) – Specify loaded file filtering parameters. Shortcut for reader_kwargs[‘filter_parameters’].

  • reader_kwargs (dict) – Keyword arguments to pass to specific reader instances. Either a single dictionary that will be passed onto to all reader instances, or a dictionary mapping reader names to sub-dictionaries to pass different arguments to different reader instances.

  • ppp_config_dir (str) – The directory containing the configuration files for satpy.

  • base_dir (str) – (DEPRECATED) The directory to search for files containing the data to load. If filenames is also provided, this is ignored.

  • sensor (list or str) – (DEPRECATED: Use find_files_and_readers function) Limit used files by provided sensors.

  • area (AreaDefinition) – (DEPRECATED: Use filter_parameters) Limit used files by geographic area.

  • start_time (datetime) – (DEPRECATED: Use filter_parameters) Limit used files by starting time.

  • end_time (datetime) – (DEPRECATED: Use filter_parameters) Limit used files by ending time.

aggregate(dataset_ids=None, boundary='exact', side='left', func='mean', **dim_kwargs)[source]

Create an aggregated version of the Scene.

Parameters
  • dataset_ids (iterable) – DataIDs to include in the returned Scene. Defaults to all datasets.

  • func (string) – Function to apply on each aggregation window. One of ‘mean’, ‘sum’, ‘min’, ‘max’, ‘median’, ‘argmin’, ‘argmax’, ‘prod’, ‘std’, ‘var’. ‘mean’ is the default.

  • boundary – Not implemented.

  • side – Not implemented.

  • dim_kwargs – the size of the windows to aggregate.

Returns

A new aggregated scene

See also

xarray.DataArray.coarsen

Example

scn.aggregate(func='min', x=2, y=2) will aggregate 2x2 pixels by applying the min function.

all_composite_ids()[source]

Get all IDs for configured composites.

all_composite_names()[source]

Get all names for all configured composites.

all_dataset_ids(reader_name=None, composites=False)[source]

Get IDs of all datasets from loaded readers or reader_name if specified.

Returns: list of all dataset IDs

all_dataset_names(reader_name=None, composites=False)[source]

Get all known dataset names configured for the loaded readers.

Note that some readers dynamically determine what datasets are known by reading the contents of the files they are provided. This means that the list of datasets returned by this method may change depending on what files are provided even if a product/dataset is a “standard” product for a particular reader.

all_modifier_names()[source]

Get names of configured modifier objects.

property all_same_area

All contained data arrays are on the same area.

property all_same_proj

All contained data arrays are in the same projection.

available_composite_ids()[source]

Get IDs of composites that can be generated from the available datasets.

available_composite_names()[source]

All configured composites known to this Scene.

available_dataset_ids(reader_name=None, composites=False)[source]

Get DataIDs of loadable datasets.

This can be for all readers loaded by this Scene or just for reader_name if specified.

Available dataset names are determined by what each individual reader can load. This is normally determined by what files are needed to load a dataset and what files have been provided to the scene/reader. Some readers dynamically determine what is available based on the contents of the files provided.

Returns: list of available dataset names

available_dataset_names(reader_name=None, composites=False)[source]

Get the list of the names of the available datasets.

copy(datasets=None)[source]

Create a copy of the Scene including dependency information.

Parameters

datasets (list, tuple) – DataID objects for the datasets to include in the new Scene object.

crop(area=None, ll_bbox=None, xy_bbox=None, dataset_ids=None)[source]

Crop Scene to a specific Area boundary or bounding box.

Parameters
  • area (AreaDefinition) – Area to crop the current Scene to

  • ll_bbox (tuple, list) – 4-element tuple where values are in lon/lat degrees. Elements are (xmin, ymin, xmax, ymax) where X is longitude and Y is latitude.

  • xy_bbox (tuple, list) – Same as ll_bbox but elements are in projection units.

  • dataset_ids (iterable) – DataIDs to include in the returned Scene. Defaults to all datasets.

This method will attempt to intelligently slice the data to preserve relationships between datasets. For example, if we are cropping two DataArrays of 500m and 1000m pixel resolution then this method will assume that exactly 4 pixels of the 500m array cover the same geographic area as a single 1000m pixel. It handles these cases based on the shapes of the input arrays, adjusting slicing indexes accordingly. This method will have trouble handling cases where data arrays seem related but don’t cover the same geographic area, or if the coarsest resolution data is not related to the other arrays which are related.

It can be useful to follow cropping with a call to the native resampler to resolve all datasets to the same resolution and compute any composites that could not be generated previously:

>>> cropped_scn = scn.crop(ll_bbox=(-105., 40., -95., 50.))
>>> remapped_scn = cropped_scn.resample(resampler='native')

Note

The resample method automatically crops input data before resampling to save time/memory.

property end_time

Return the end time of the file.

get(key, default=None)[source]

Return value from DatasetDict with optional default.

images()[source]

Generate images for all the datasets from the scene.

iter_by_area()[source]

Generate datasets grouped by Area.

Returns

generator of (area_obj, list of dataset objects)

keys(**kwargs)[source]

Get DataID keys for the underlying data container.

load(wishlist, calibration='*', resolution='*', polarization='*', level='*', generate=True, unload=True, **kwargs)[source]

Read and generate requested datasets.

When the wishlist contains DataQuery objects they can either be fully-specified DataQuery objects with every parameter specified or they can not provide certain parameters and the “best” parameter will be chosen. For example, if a dataset is available in multiple resolutions and no resolution is specified in the wishlist’s DataQuery then the highest (smallest number) resolution will be chosen.

Loaded DataArray objects are created and stored in the Scene object.

Parameters
  • wishlist (iterable) – List of names (str), wavelengths (float), DataQuery objects or DataID of the requested datasets to load. See available_dataset_ids() for what datasets are available.

  • calibration (list, str) – Calibration levels to limit available datasets. This is a shortcut to having to list each DataQuery/DataID in wishlist.

  • resolution (list | float) – Resolution to limit available datasets. This is a shortcut similar to calibration.

  • polarization (list | str) – Polarization (‘V’, ‘H’) to limit available datasets. This is a shortcut similar to calibration.

  • level (list | str) – Pressure level to limit available datasets. Pressure should be in hPa or mb. If an altitude is used it should be specified in inverse meters (1/m). The units of this parameter ultimately depend on the reader.

  • generate (bool) – Generate composites from the loaded datasets (default: True)

  • unload (bool) – Unload datasets that were required to generate the requested datasets (composite dependencies) but are no longer needed.
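
A hedged usage sketch; the dataset names, wavelength, and calibration shown here depend on the reader and are only illustrative:

>>> scn.load(['I01', 'I02'])
>>> scn.load([10.8], calibration='brightness_temperature')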

max_area(datasets=None)[source]

Get highest resolution area for the provided datasets.

Parameters

datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.

min_area(datasets=None)[source]

Get lowest resolution area for the provided datasets.

Parameters

datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.

property missing_datasets

Set of DataIDs that have not been successfully loaded.

resample(destination=None, datasets=None, generate=True, unload=True, resampler=None, reduce_data=True, **resample_kwargs)[source]

Resample datasets and return a new scene.

Parameters
  • destination (AreaDefinition, GridDefinition) – area definition to resample to. If not specified then the area returned by Scene.max_area() will be used.

  • datasets (list) – Limit datasets to resample to these specified data arrays. By default all currently loaded datasets are resampled.

  • generate (bool) – Generate any requested composites that could not be generated previously due to incompatible areas (default: True).

  • unload (bool) – Remove any datasets no longer needed after requested composites have been generated (default: True).

  • resampler (str) – Name of resampling method to use. By default, this is a nearest neighbor KDTree-based resampling (‘nearest’). Other possible values include ‘native’, ‘ewa’, etc. See the resample documentation for more information.

  • reduce_data (bool) – Reduce data by matching the input and output areas and slicing the data arrays (default: True)

  • resample_kwargs – Remaining keyword arguments to pass to individual resampler classes. See the individual resampler class documentation here for available arguments.

save_dataset(dataset_id, filename=None, writer=None, overlay=None, decorate=None, compute=True, **kwargs)[source]

Save the dataset_id to file using writer.

Parameters
  • dataset_id (str or Number or DataID or DataQuery) – Identifier for the dataset to save to disk.

  • filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.

  • writer (str) – Name of writer to use when writing data to disk. Defaults to "geotiff". If not provided but filename is provided, then the filename’s extension is used to determine the best writer to use.

  • overlay (dict) – See satpy.writers.add_overlay(). Only valid for “image” writers like geotiff or simple_image.

  • decorate (dict) – See satpy.writers.add_decorate(). Only valid for “image” writers like geotiff or simple_image.

  • compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a Delayed object or two lists to be passed to a dask.array.store call. See return values below for more details.

  • kwargs – Additional writer arguments. See Writers for more information.

Returns

Value returned depends on compute. If compute is True then the return value is the result of computing a Delayed object or running dask.array.store(). If compute is False then the returned value is either a Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store(). If target is provided then the caller is responsible for calling target.close() if the target has this method.
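
A hedged usage sketch; the dataset name and filename pattern are only illustrative:

>>> scn.save_dataset('I01', filename='{name}_{start_time:%Y%m%d_%H%M%S}.tif',
...                  writer='geotiff')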

save_datasets(writer=None, filename=None, datasets=None, compute=True, **kwargs)[source]

Save all the datasets present in a scene to disk using writer.

Parameters
  • writer (str) – Name of writer to use when writing data to disk. Defaults to "geotiff". If not provided but filename is provided, then the filename’s extension is used to determine the best writer to use.

  • filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.

  • datasets (iterable) – Limit written products to these datasets

  • compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a Delayed object or two lists to be passed to a dask.array.store call. See return values below for more details.

  • kwargs – Additional writer arguments. See Writers for more information.

Returns

Value returned depends on compute keyword argument. If compute is True the value is the result of either a dask.array.store operation or a Delayed compute, typically this is None. If compute is False then the result is either a Delayed object that can be computed with delayed.compute() or a two element tuple of sources and targets to be passed to dask.array.store(). If targets is provided then it is the caller’s responsibility to close any objects that have a “close” method.

show(dataset_id, overlay=None)[source]

Show the dataset on screen as an image.

Show dataset on screen as an image, possibly with an overlay.

Parameters
  • dataset_id (DataID, DataQuery or str) – Either a DataID, a DataQuery or a string, that refers to a data array that has been previously loaded using Scene.load.

  • overlay (dict, optional) – Add an overlay before showing the image. The keys/values for this dictionary are as the arguments for add_overlay(). The dictionary should contain at least the key "coast_dir", which should refer to a top-level directory containing shapefiles. See the pycoast package documentation for coastline shapefile installation instructions.
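
A hedged usage sketch; the dataset name and shapefile directory are only illustrative:

>>> scn.show('I01', overlay={'coast_dir': '/usr/share/gshhg_shapefiles'})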

slice(key)[source]

Slice Scene by dataset index.

Note

DataArrays that do not have an area attribute will not be sliced.

property start_time

Return the start time of the file.

to_geoviews(gvtype=None, datasets=None, kdims=None, vdims=None, dynamic=False)[source]

Convert satpy Scene to geoviews.

Parameters
  • gvtype (gv plot type) – One of gv.Image, gv.LineContours, gv.FilledContours, gv.Points. Defaults to geoviews.Image. See Geoviews documentation for details.

  • datasets (list) – Limit included products to these datasets

  • kdims (list of str) – Key dimensions. See geoviews documentation for more information.

  • vdims (list of str, optional) – Value dimensions. See geoviews documentation for more information. If not given, defaults to the first data variable.

  • dynamic (bool, optional) – Default False.

Returns: geoviews object

to_xarray_dataset(datasets=None)[source]

Merge all xr.DataArrays of a scene into an xr.Dataset.

Parameters

datasets (list) – List of products to include in the xarray.Dataset

Returns: xarray.Dataset

unload(keepables=None)[source]

Unload all unneeded datasets.

Datasets are considered unneeded if they weren’t directly requested or added to the Scene by the user or they are no longer needed to generate composites that have yet to be generated.

Parameters

keepables (iterable) – DataIDs to keep whether they are needed or not.

values()[source]

Get values for the underlying data container.

property wishlist

Return a copy of the wishlist.

satpy.utils module

Module defining various utilities.

class satpy.utils.OrderedConfigParser(*args, **kwargs)[source]

Bases: object

Intercepts read and stores ordered section names.

Cannot use inheritance and super() as ConfigParser uses old-style classes.

Initialize the instance.

read(filename)[source]

Read config file.

sections()[source]

Get sections from config file.

satpy.utils.angle2xyz(azi, zen)[source]

Convert azimuth and zenith to cartesian.

satpy.utils.atmospheric_path_length_correction(data, cos_zen, limit=88.0, max_sza=95.0)[source]

Perform Sun zenith angle correction.

This function uses the correction method proposed by Li and Shibata (2006): https://doi.org/10.1175/JAS3682.1

The correction is limited to limit degrees (default: 88.0 degrees). For larger zenith angles, the correction is the same as at the limit if max_sza is None. The default behavior is to gradually reduce the correction past limit degrees up to max_sza where the correction becomes 0. Both data and cos_zen should be 2D arrays of the same shape.

satpy.utils.debug_on()[source]

Turn debugging logging on.

satpy.utils.ensure_dir(filename)[source]

Check if the directory of filename exists, otherwise create it.

satpy.utils.get_logger(name)[source]

Return logger with null handler added if needed.

satpy.utils.get_satpos(dataset)[source]

Get satellite position from dataset attributes.

Preferences are:

  • Longitude & Latitude: Nadir, actual, nominal, projection

  • Altitude: Actual, nominal, projection

A warning is issued when projection values have to be used because nothing else is available.

Returns

Geodetic longitude, latitude, altitude
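
A hedged usage sketch, assuming scn['C13'] is a previously loaded data array with satellite position metadata:

>>> from satpy.utils import get_satpos
>>> lon, lat, alt = get_satpos(scn['C13'])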

satpy.utils.in_ipynb()[source]

Check if we are in a jupyter notebook.

satpy.utils.logging_off()[source]

Turn logging off.

satpy.utils.logging_on(level=30)[source]

Turn logging on.

satpy.utils.lonlat2xyz(lon, lat)[source]

Convert lon lat to cartesian.

satpy.utils.proj_units_to_meters(proj_str)[source]

Convert projection units from kilometers to meters.

satpy.utils.sunzen_corr_cos(data, cos_zen, limit=88.0, max_sza=95.0)[source]

Perform Sun zenith angle correction.

The correction is based on the provided cosine of the zenith angle (cos_zen). The correction is limited to limit degrees (default: 88.0 degrees). For larger zenith angles, the correction is the same as at the limit if max_sza is None. The default behavior is to gradually reduce the correction past limit degrees up to max_sza where the correction becomes 0. Both data and cos_zen should be 2D arrays of the same shape.

satpy.utils.trace_on()[source]

Turn trace logging on.

satpy.utils.xyz2angle(x, y, z, acos=False)[source]

Convert cartesian to azimuth and zenith.

satpy.utils.xyz2lonlat(x, y, z, asin=False)[source]

Convert cartesian to lon lat.
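
A round-trip sketch with lonlat2xyz (angles in degrees are assumed):

>>> from satpy.utils import lonlat2xyz, xyz2lonlat
>>> x, y, z = lonlat2xyz(25.0, 60.0)
>>> lon, lat = xyz2lonlat(x, y, z)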

Module contents

Satpy Package initializer.