satpy.writers package¶
Submodules¶
satpy.writers.cf_writer module¶
Writer for netCDF4/CF.
Example usage¶
The CF writer saves datasets in a Scene as a CF-compliant netCDF file. Here is an example with MSG SEVIRI data in HRIT format:
>>> from satpy import Scene
>>> import glob
>>> filenames = glob.glob('data/H*201903011200*')
>>> scn = Scene(filenames=filenames, reader='seviri_l1b_hrit')
>>> scn.load(['VIS006', 'IR_108'])
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc',
exclude_attrs=['raw_metadata'])
You can select the netCDF backend using the engine keyword argument. Default is h5netcdf.
For datasets with an area definition you can exclude lat/lon coordinates by setting include_lonlats=False.
By default the dataset name is prepended to non-dimensional coordinates such as scanline timestamps. This ensures maximum consistency, i.e. the netCDF variable names are independent of the number/set of datasets to be written. If a non-dimensional coordinate is identical for all datasets, the prefix can be removed by setting pretty=True.
Grouping¶
All datasets to be saved must have the same projection coordinates x and y. If a scene holds datasets with different grids, the CF compliant workaround is to save the datasets to separate files. Alternatively, you can save datasets with common grids in separate netCDF groups as follows:
>>> scn.load(['VIS006', 'IR_108', 'HRV'])
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108', 'HRV'],
filename='seviri_test.nc', exclude_attrs=['raw_metadata'],
groups={'visir': ['VIS006', 'IR_108'], 'hrv': ['HRV']})
Note that the resulting file will not be fully CF compliant.
Attribute Encoding¶
In the above examples, raw metadata from the HRIT files have been excluded. If you want all attributes to be included,
just remove the exclude_attrs
keyword argument. By default, dict-type dataset attributes, such as the raw metadata,
are encoded as a string using json. Thus, you can use json to decode them afterwards:
>>> import xarray as xr
>>> import json
>>> # Save scene to nc-file
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc')
>>> # Now read data from the nc-file
>>> ds = xr.open_dataset('seviri_test.nc')
>>> raw_mda = json.loads(ds['IR_108'].attrs['raw_metadata'])
>>> print(raw_mda['RadiometricProcessing']['Level15ImageCalibration']['CalSlope'])
[0.020865 0.0278287 0.0232411 0.00365867 0.00831811 0.03862197
0.12674432 0.10396091 0.20503568 0.22231115 0.1576069 0.0352385]
Alternatively it is possible to flatten dict-type attributes by setting flatten_attrs=True. This is more human readable as it will create a separate nc-attribute for each item in every dictionary. Keys are concatenated with underscore separators. The CalSlope attribute can then be accessed as follows:
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc',
flatten_attrs=True)
>>> ds = xr.open_dataset('seviri_test.nc')
>>> print(ds['IR_108'].attrs['raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope'])
[0.020865 0.0278287 0.0232411 0.00365867 0.00831811 0.03862197
0.12674432 0.10396091 0.20503568 0.22231115 0.1576069 0.0352385]
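The key concatenation can be sketched in plain Python. This is an illustrative helper, not the writer's actual implementation:

```python
def flatten_dict(attrs, parent_key="", sep="_"):
    """Recursively flatten a nested dict, joining keys with underscores."""
    flat = {}
    for key, value in attrs.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_dict(value, new_key, sep=sep))
        else:
            flat[new_key] = value
    return flat


raw = {"raw_metadata": {"RadiometricProcessing": {"Level15ImageCalibration": {"CalSlope": 0.021}}}}
print(flatten_dict(raw))
# {'raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope': 0.021}
```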
This is what the corresponding ncdump
output would look like in this case:
$ ncdump -h seviri_test.nc
...
IR_108:raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalOffset = -1.064, ...;
IR_108:raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope = 0.021, ...;
IR_108:raw_metadata_RadiometricProcessing_MPEFCalFeedback_AbsCalCoeff = 0.021, ...;
...
-
class satpy.writers.cf_writer.AttributeEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]¶
Bases: json.encoder.JSONEncoder
JSON encoder for dataset attributes.
Constructor for JSONEncoder, with sensible defaults.
If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.
If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.
If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause an OverflowError). Otherwise, no such check takes place.
If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.
If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.
If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.
If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace.
If specified, default is a function that gets called for objects that can't otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.
-
class satpy.writers.cf_writer.CFWriter(name=None, filename=None, base_dir=None, **kwargs)[source]¶
Bases: satpy.writers.Writer
Writer producing NetCDF/CF compatible datasets.
Initialize the writer object.
- Parameters
name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
filename (str) – Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the .attrs of a DataArray object may be included. Format and conversion specifiers provided by the trollsift package may also be used. Any directories in the provided pattern will be created if they do not exist. Example: {platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
base_dir (str) – Base destination directories for all created files.
kwargs (dict) – Additional keyword arguments to pass to the
Plugin
class.
-
static
da2cf
(dataarray, epoch='seconds since 1970-01-01 00:00:00', flatten_attrs=False, exclude_attrs=None, compression=None)[source]¶ Convert the dataarray to something cf-compatible.
-
save_dataset
(dataset, filename=None, fill_value=None, **kwargs)[source]¶ Save the dataset to a given filename.
-
save_datasets
(datasets, filename=None, groups=None, header_attrs=None, engine=None, epoch='seconds since 1970-01-01 00:00:00', flatten_attrs=False, exclude_attrs=None, include_lonlats=True, pretty=False, compression=None, **to_netcdf_kwargs)[source]¶ Save the given datasets in one netCDF file.
Note that all datasets (if grouping: in one group) must have the same projection coordinates.
- Parameters
datasets (list) – Datasets to be saved
filename (str) – Output file
groups (dict) – Group datasets according to the given assignment: {'group_name': ['dataset1', 'dataset2', ...]}. Group name None corresponds to the root of the file, i.e. no group will be created. Warning: The results will not be fully CF compliant!
header_attrs – Global attributes to be included
engine (str) – Module to be used for writing netCDF files. Follows xarray's to_netcdf() engine choices with a preference for 'netcdf4'.
epoch (str) – Reference time for encoding of time coordinates
flatten_attrs (bool) – If True, flatten dict-type attributes
exclude_attrs (list) – List of dataset attributes to be excluded
include_lonlats (bool) – Always include latitude and longitude coordinates, even for datasets with area definition
pretty (bool) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
compression (dict) – Compression to use on the datasets before saving, for example {'zlib': True, 'complevel': 9}. This is passed on to xarray's to_netcdf method; see http://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_netcdf.html for more possibilities.
-
satpy.writers.cf_writer.
area2cf
(dataarray, strict=False, got_lonlats=False)[source]¶ Convert an area to a CF grid mapping or lons and lats.
-
satpy.writers.cf_writer.
area2lonlat
(dataarray)[source]¶ Convert an area to longitudes and latitudes.
-
satpy.writers.cf_writer.
assert_xy_unique
(datas)[source]¶ Check that all datasets share the same projection coordinates x/y.
-
satpy.writers.cf_writer.
create_grid_mapping
(area)[source]¶ Create the grid mapping instance for area.
-
satpy.writers.cf_writer.
dataset_is_projection_coords
(dataset)[source]¶ Check if the dataset is a projection coordinate.
-
satpy.writers.cf_writer.
encode_attrs_nc
(attrs)[source]¶ Encode dataset attributes in a netcdf compatible datatype.
-
satpy.writers.cf_writer.
encode_nc
(obj)[source]¶ Encode the given object as a netcdf compatible datatype.
Try to find the datatype which most closely resembles the object’s nature. If that fails, encode as a string. Plain lists are encoded recursively.
-
satpy.writers.cf_writer.
get_extra_ds
(dataset, keys=None)[source]¶ Get the extra datasets associated to dataset.
-
satpy.writers.cf_writer.
has_projection_coords
(ds_collection)[source]¶ Check if the collection has projection coordinates among its data arrays.
-
satpy.writers.cf_writer.
link_coords
(datas)[source]¶ Link datasets and coordinates.
If the coordinates attribute of a data array links to other datasets in the scene, for example coordinates=’lon lat’, add them as coordinates to the data array and drop that attribute. In the final call to xr.Dataset.to_netcdf() all coordinate relations will be resolved and the coordinates attributes be set automatically.
-
satpy.writers.cf_writer.
make_alt_coords_unique
(datas, pretty=False)[source]¶ Make non-dimensional coordinates unique among all datasets.
Non-dimensional (or alternative) coordinates, such as scanline timestamps, may occur in multiple datasets with the same name and dimension but different values. In order to avoid conflicts, prepend the dataset name to the coordinate name. If a non-dimensional coordinate is unique among all datasets and pretty=True, its name will not be modified.
Since all datasets must have the same projection coordinates, this is not applied to latitude and longitude.
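The renaming rule can be sketched with plain dicts (a hypothetical helper for illustration; the real function operates on xarray objects):

```python
def uniquify_coord(datasets, coord, pretty=False):
    """Prefix a non-dimensional coordinate with its dataset name unless it
    is identical in all datasets and pretty=True."""
    values = [tuple(ds[coord]) for ds in datasets.values()]
    if pretty and len(set(values)) == 1:
        # Identical everywhere: keep the plain coordinate name.
        return {name: {coord: ds[coord]} for name, ds in datasets.items()}
    # Otherwise prepend the dataset name to avoid conflicts.
    return {name: {f"{name}_{coord}": ds[coord]} for name, ds in datasets.items()}


datasets = {"VIS006": {"acq_time": [1, 2]}, "IR_108": {"acq_time": [1, 3]}}
print(uniquify_coord(datasets, "acq_time", pretty=True))
# {'VIS006': {'VIS006_acq_time': [1, 2]}, 'IR_108': {'IR_108_acq_time': [1, 3]}}
```

Because the acquisition times differ between the two datasets, the prefix is kept even with pretty=True.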
satpy.writers.geotiff module¶
GeoTIFF writer objects for creating GeoTIFF files from DataArray objects.
-
class
satpy.writers.geotiff.
GeoTIFFWriter
(dtype=None, tags=None, **kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
Writer to save GeoTIFF images.
Basic example from Scene:
>>> scn.save_datasets(writer='geotiff')
Un-enhanced float geotiff with NaN for fill values:
>>> scn.save_datasets(writer='geotiff', dtype=np.float32, enhance=False)
To add custom metadata use tags:
>>> scn.save_dataset(dataset_name, writer='geotiff', tags={'offset': 291.8, 'scale': -0.35})
For performance tips on creating geotiffs quickly and making them smaller see the Frequently Asked Questions.
Init the writer.
-
GDAL_OPTIONS
= ('tfw', 'rpb', 'rpctxt', 'interleave', 'tiled', 'blockxsize', 'blockysize', 'nbits', 'compress', 'num_threads', 'predictor', 'discard_lsb', 'sparse_ok', 'jpeg_quality', 'jpegtablesmode', 'zlevel', 'photometric', 'alpha', 'profile', 'bigtiff', 'pixeltype', 'copy_src_overviews')¶
-
save_image(img, filename=None, dtype=None, fill_value=None, compute=True, keep_palette=False, cmap=None, tags=None, overviews=None, overviews_minsize=256, overviews_resampling=None, include_scale_offset=False, **kwargs)[source]¶ Save the image to the given filename in geotiff format.
Note: for faster output and reduced memory usage the rasterio library must be installed. This writer currently falls back to using gdal directly, but that will be deprecated in the future.
- Parameters
img (xarray.DataArray) – Data to save to geotiff.
filename (str) – Filename to save the image to. Defaults to filename passed during writer creation. Unlike the creation filename keyword argument, this filename does not get formatted with data attributes.
dtype (numpy.dtype) – Numpy data type to save the image as. Defaults to 8-bit unsigned integer (np.uint8). If the dtype argument is provided during writer creation then that will be used as the default.
fill_value (int or float) – Value to use where data values are NaN/null. If this is specified in the writer configuration file that value will be used as the default.
compute (bool) – Compute dask arrays and save the image immediately. If False then the return value can be passed to compute_writer_results() to do the computation. This is useful when multiple images may share input calculations where dask can benefit from not repeating them multiple times. Defaults to True in the writer by itself, but is typically passed as False by callers where calculations can be combined.
keep_palette (bool) – Save palette/color table to geotiff. To be used with images that were palettized with the "palettize" enhancement. Setting this to True will cause the colormap of the image to be written as a "color table" in the output geotiff and the image data values will represent the index values in to that color table. By default, this will use the colormap used in the "palettize" operation. See the cmap option for other options. This option defaults to False and palettized images will be converted to RGB/A.
cmap (trollimage.colormap.Colormap or None) – Colormap to save as a color table in the output geotiff. See keep_palette for more information. Defaults to the palette of the provided img object. The colormap's range should be set to match the index range of the palette (ex. cmap.set_range(0, len(colors))).
tags (dict) – Extra metadata to store in geotiff.
overviews (list) –
The reduction factors of the overviews to include in the image, eg:
scn.save_datasets(overviews=[2, 4, 8, 16])
If provided as an empty list, then levels will be computed as powers of two until the last level has less pixels than overviews_minsize. Default is to not add overviews.
overviews_minsize (int) – Minimum number of pixels for the smallest overview size generated when overviews is auto-generated. Defaults to 256.
overviews_resampling (str) – Resampling method to use when generating overviews. This must be the name of an enum value from rasterio.enums.Resampling and only takes effect if the overviews keyword argument is provided. Common values include nearest (default), bilinear, average, and many others. See the rasterio documentation for more information.
include_scale_offset (bool) – Activate inclusion of scale and offset factors in the geotiff to allow retrieving original values from the pixel values. False by default.
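One plausible reading of the auto-generated overview levels described above is powers of two kept while the reduced image stays at least overviews_minsize pixels on its smallest side. This is only an illustrative sketch of the documented behaviour, not rasterio's or the writer's exact logic:

```python
def auto_overview_levels(width, height, minsize=256):
    """Collect power-of-two reduction factors while the smallest
    reduced dimension is still at least `minsize` pixels."""
    levels = []
    factor = 2
    while min(width, height) // factor >= minsize:
        levels.append(factor)
        factor *= 2
    return levels


print(auto_overview_levels(3712, 3712))  # a full-disk SEVIRI-sized image
# [2, 4, 8]
```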
-
satpy.writers.mitiff module¶
MITIFF writer objects for creating MITIFF files from Dataset objects.
-
class
satpy.writers.mitiff.
MITIFFWriter
(name=None, tags=None, **kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
Writer to produce MITIFF image files.
Initialize reader with tag and other configuration information.
-
save_dataset
(dataset, filename=None, fill_value=None, compute=True, **kwargs)[source]¶ Save single dataset as mitiff file.
-
satpy.writers.ninjotiff module¶
Writer for TIFF images compatible with the NinJo visualization tool (NinjoTIFFs).
NinjoTIFFs can be color images or monochromatic. For monochromatic images, the physical units and scale and offsets to retrieve the physical values are provided. Metadata is also recorded in the file.
In order to write ninjotiff files, some metadata needs to be provided to the writer. Here is an example on how to write a color image:
chn = "airmass"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="airmass.tif", writer='ninjotiff',
sat_id=6300014,
chan_id=6500015,
data_cat='GPRN',
data_source='EUMCAST',
nbits=8)
Here is an example on how to write a black and white image:
chn = "IR_108"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="msg.tif", writer='ninjotiff',
sat_id=6300014,
chan_id=900015,
data_cat='GORN',
data_source='EUMCAST',
physic_unit='K',
nbits=8)
The metadata to provide to the writer can also be stored in a configuration file (see pyninjotiff), so that the previous example can be rewritten as:
chn = "IR_108"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="msg.tif", writer='ninjotiff',
# ninjo product name to look for in .cfg file
ninjo_product_name="IR_108",
# custom configuration file for ninjo tiff products
# if not specified PPP_CONFIG_DIR is used as config file directory
ninjo_product_file="/config_dir/ninjotiff_products.cfg")
-
class
satpy.writers.ninjotiff.
NinjoTIFFWriter
(tags=None, **kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
Writer for NinjoTiff files.
Initialize the writer.
satpy.writers.awips_tiled module¶
The AWIPS Tiled writer is used to create AWIPS-compatible tiled NetCDF4 files.
The Advanced Weather Interactive Processing System (AWIPS) is a program used by the United States National Weather Service (NWS) and others to view different forms of weather imagery. The original Sectorized Cloud and Moisture Imagery (SCMI) functionality in AWIPS was a NetCDF4 format supported by AWIPS to store one image broken up in to one or more “tiles”. This format has since been expanded to support many other products and so the writer for this format in Satpy is generically called the “AWIPS Tiled” writer. You may still see SCMI referenced in this documentation or in the source code for the writer. Once AWIPS is configured for specific products this writer can be used to provide compatible products to the system.
The AWIPS Tiled writer takes 2D (y, x) geolocated data and creates one or more AWIPS-compatible NetCDF4 files. The writer and the AWIPS client may need to be configured to make things appear the way the user wants in the AWIPS client. The writer can only produce files for datasets mapped to areas with specific projections:
lcc
geos
merc
stere
This is a limitation of the AWIPS client and not of the writer. In the case where AWIPS has been updated to support additional projections, this writer may also need to be updated to support those projections.
AWIPS Configuration¶
Depending on how this writer is used and the data it is provided, AWIPS may need additional configuration on the server side to properly ingest the files produced. This will require administrator privileges to the ingest server(s) and is not something that can be configured on the client. Note that any changes required must be done on all servers that you wish to ingest your data files. The generic “polar” template this writer defaults to should limit the number of modifications needed for any new data fields that AWIPS previously was unaware of. Once the data is ingested, the client can be used to customize how the data looks on screen.
AWIPS requires files to follow a specific naming scheme so they can be routed to specific “decoders”. For the files produced by this writer, this typically means editing the “goesr” decoder configuration in a directory like:
/awips2/edex/data/utility/common_static/site/<site>/distribution/goesr.xml
The “goesr” decoder is a subclass of the “satellite” decoder. You may see either name show up in the AWIPS ingest logs. With the correct regular expression in the above file, your files should be passed to the right decoder, opened, and parsed for data.
To tell AWIPS exactly what attributes and variables mean in your file, you’ll need to create or configure an XML file in:
/awips2/edex/data/utility/common_static/site/<site>/satellite/goesr/descriptions/
See the existing files in this directory for examples. The “polar” template (see below) that this writer uses by default is already configured in the “Polar” subdirectory assuming that the TOWR-S RPM package has been installed on your AWIPS ingest server.
Templates¶
This writer allows for a “template” to be specified to control how the output
files are structured and created. Templates can be configured in the writer
YAML file (awips_tiled.yaml
) or passed as a dictionary to the template
keyword argument. Templates have three main sections:
global_attributes
coordinates
variables
Additionally, you can specify whether a template should produce files with
one variable per file by specifying single_variable: true
or multiple
variables per file by specifying single_variable: false
. You can also
specify the output filename for a template using a Python format string.
See awips_tiled.yaml for examples. Lastly, an add_sector_id_global boolean parameter can be specified to add the user-provided sector_id keyword argument as a global attribute to the file.
The global_attributes
section takes names of global attributes and
then a series of options to “render” that attribute from the metadata
provided when creating files. For example:
- product_name:
    value: "{name}"
For more information see the
satpy.writers.awips_tiled.NetCDFTemplate.get_attr_value()
method.
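Rendering such a value template can be sketched with str.format and os.path.expandvars. The real writer uses trollsift's StringFormatter, which supports extra conversions; this helper is only illustrative:

```python
import os


def render_attr(value_template, metadata):
    """Expand ${ENV_VAR} references, then fill {key} fields from metadata."""
    expanded = os.path.expandvars(value_template)
    return expanded.format(**metadata)


metadata = {"name": "C13", "platform_name": "GOES-16"}
print(render_attr("{platform_name}_{name}", metadata))
# GOES-16_C13
```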
The coordinates
and variables
are similar to each other in that they
define how a variable should be created, the attributes it should have, and
the encoding to write to the file. Coordinates typically don’t need to be
modified as tiled files usually have only x
and y
dimension variables.
The variables sections, on the other hand, use a decision tree to determine which section applies to a particular DataArray being saved. The basic structure is:
variables:
  arbitrary_section_name:
    <decision tree matching parameters>
    var_name: "output_netcdf_variable_name"
    attributes:
      <attributes similar to global attributes>
    encoding:
      <xarray encoding parameters>
The "decision tree matching parameters" can be one or more of "name", "standard_name", "satellite", "sensor", "area_id", "units", or "reader". The writer will choose the best section for the DataArray being saved (the most matches). If none of these parameters are specified in a section then it will be used when no other matches are found (the "default" section).
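The "most matches wins" selection can be sketched as follows. This is a hypothetical helper for illustration, not the writer's DecisionTree class:

```python
def best_section(sections, attrs):
    """Choose the section whose match keys agree most with the DataArray
    attributes; a contradicting key disqualifies a section, and a section
    with no match keys serves as the default."""
    match_keys = ("name", "standard_name", "satellite", "sensor",
                  "area_id", "units", "reader")
    best_name, best_score = None, -1
    for name, section in sections.items():
        score = 0
        matched = True
        for key in match_keys:
            if key in section:
                if section[key] == attrs.get(key):
                    score += 1
                else:
                    matched = False
                    break
        if matched and score > best_score:
            best_name, best_score = name, score
    return best_name


sections = {
    "default": {"var_name": "data"},
    "ir_bt": {"standard_name": "toa_brightness_temperature"},
}
print(best_section(sections, {"standard_name": "toa_brightness_temperature"}))
# ir_bt
```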
The "encoding" parameters can be anything accepted by xarray's to_netcdf method. See xarray.Dataset.to_netcdf() for more information on the encoding keyword argument.
For more examples see the existing builtin templates defined in
awips_tiled.yaml
.
Builtin Templates¶
There are only a few templates provided in Satpy currently.
polar: A custom format developed for the CSPP Polar2Grid project at the University of Wisconsin - Madison Space Science and Engineering Center (SSEC). This format is made available through the TOWR-S package that can be installed for GOES-R support in AWIPS. This format is meant to be very generic and should theoretically allow any variable to get ingested into AWIPS.
glm_l2_radc: This format is used to produce standard files for the gridded GLM products produced by the CSPP Geo Gridded GLM package. Support for this format is also available in the TOWR-S package on an AWIPS ingest server. This format is specific to gridded GLM on the CONUS sector and is not meant to work for other data.
glm_l2_radf: This format is used to produce standard files for the gridded GLM products produced by the CSPP Geo Gridded GLM package. Support for this format is also available in the TOWR-S package on an AWIPS ingest server. This format is specific to gridded GLM on the Full Disk sector and is not meant to work for other data.
Numbered versus Lettered Grids¶
By default this writer will save tiles by number starting with ‘1’ representing the upper-left image tile. Tile numbers then increase along the column and then on to the next row.
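Assuming the row-major ordering described above, the tile number for a given tile row and column can be sketched as (illustrative only):

```python
def tile_number(row, col, tile_columns):
    """Tile 1 is the upper-left; numbers increase across a row of tiles,
    then continue on the next row (row-major order)."""
    return row * tile_columns + col + 1


# A 2-row by 3-column tile grid:
print([[tile_number(r, c, 3) for c in range(3)] for r in range(2)])
# [[1, 2, 3], [4, 5, 6]]
```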
By specifying lettered_grid as True tiles can be designated with a letter. Lettered grids or sectors are preconfigured in the awips_tiled.yaml configuration file. The lettered tile locations are static and will not change with the data being written to them. Each lettered tile is split into a certain number of subtiles (num_subtiles), default 2 rows by 2 columns. Lettered tiles are meant to make it easier for receiving AWIPS clients/stations to filter what tiles they receive; saving time, bandwidth, and space.
Any tiles (numbered or lettered) not containing any valid data are not created.
Updating tiles¶
There are some input data cases where we want to put new data in a tile file written by a previous execution. An example is a pre-tiled input dataset that is processed one tile at a time. One input tile may map to one or more output AWIPS tiles, but may not be perfectly aligned, leaving empty/unused space in the output tile. The next input tile may be able to fill in that empty space and should be allowed to write the "new" data to the file. This is the default behavior of the AWIPS tiled writer. In cases where data overlaps the existing data in the tile, the newer data has priority.
Shifting Lettered Grids¶
Due to the static nature of the lettered grids, there is sometimes a need to shift the locations of where these tiles are by up to 0.5 pixels in each dimension to align with the data being processed. This means that the tiles for a 1000m resolution grid may be shifted up to 500m in each direction from the original definition of the lettered “sector”. This can cause differences in the location of the tiles between executions depending on the locations of the input data. In the worst case tile A01 from one execution could be shifted up to 1 grid cell from tile A01 in another execution (one is shifted 0.5 pixels to the left, the other is shifted 0.5 to the right).
This shifting makes the calculations for generating tiles easier and
more accurate. By default, the lettered tile locations are changed to match
the location of the data. This works well when output tiles will not be
updated (see above) in future processing. In cases where output tiles will be
filled in or updated with more data the use_sector_reference
keyword
argument can be set to True
to tell the writer to shift the data’s
geolocation by up to 0.5 pixels in each dimension instead of shifting the
lettered tile locations.
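The half-pixel bound can be illustrated numerically. This is a hypothetical one-dimensional sketch of the alignment shift under the stated assumptions, not the writer's actual geolocation handling:

```python
def alignment_shift(data_origin, grid_origin, pixel_res):
    """Sub-pixel remainder between the data origin and the lettered grid.

    This is the distance the tile origin (or, with use_sector_reference=True,
    the data) would move so pixels align; it is at most half a pixel.
    """
    offset = data_origin - grid_origin
    return offset - round(offset / pixel_res) * pixel_res


# Data starting 1300 m into a 1000 m resolution lettered sector:
print(alignment_shift(1300.0, 0.0, 1000.0))
# 300.0
```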
-
class
satpy.writers.awips_tiled.
AWIPSNetCDFTemplate
(template_dict, swap_end_time=False)[source]¶ Bases:
satpy.writers.awips_tiled.NetCDFTemplate
NetCDF template renderer specifically for tiled AWIPS files.
Handle AWIPS special cases and initialize template helpers.
-
apply_misc_metadata
(new_ds, sector_id=None, creator=None, creation_time=None)[source]¶ Add attributes that don’t fit into any other category.
-
apply_tile_coord_encoding
(new_ds, xy_factors)[source]¶ Add encoding information specific to the coordinate variables.
-
render
(dataset_or_data_arrays, area_def, tile_info, sector_id, creator=None, creation_time=None, shared_attrs=None, extra_global_attrs=None)[source]¶ Create a
xarray.Dataset
from template using information provided.
-
-
class
satpy.writers.awips_tiled.
AWIPSTiledVariableDecisionTree
(decision_dicts, **kwargs)[source]¶ Bases:
satpy.writers.DecisionTree
Load AWIPS-specific metadata from YAML configuration.
Initialize decision tree with specific keys to look for.
-
class
satpy.writers.awips_tiled.
AWIPSTiledWriter
(compress=False, fix_awips=False, **kwargs)[source]¶ Bases:
satpy.writers.Writer
Writer for AWIPS NetCDF4 Tile files.
See satpy.writers.awips_tiled documentation for more information on templates and the produced file format.
Initialize writer and decision trees.
-
property
enhancer
¶ Get lazy loaded enhancer object only if needed.
-
get_filename
(template, area_def, tile_info, sector_id, **kwargs)[source]¶ Generate output NetCDF file from metadata.
-
save_datasets
(datasets, sector_id=None, source_name=None, tile_count=(1, 1), tile_size=None, lettered_grid=False, num_subtiles=None, use_end_time=False, use_sector_reference=False, template='polar', check_categories=True, extra_global_attrs=None, compute=True, **kwargs)[source]¶ Write a series of DataArray objects to multiple NetCDF4 Tile files.
- Parameters
datasets (iterable) – Series of gridded DataArray objects with the necessary metadata to be converted to a valid tile product file.
sector_id (str) – Name of the region or sector that the provided data is on. This name will be written to the NetCDF file and will be used as the sector in the AWIPS client for the 'polar' template. For lettered grids this name should match the name configured in the writer YAML. This is required for some templates (ex. default 'polar' template) but is defined as a keyword argument for better error handling in Satpy.
source_name (str) – Name of producer of these files (ex. “SSEC”). This name is used to create the output filename for some templates.
tile_count (tuple) – For numbered tiles only, how many tile rows and tile columns to produce. Defaults to (1, 1), a single giant tile. Either tile_count, tile_size, or lettered_grid should be specified.
tile_size (tuple) – For numbered tiles only, how many pixels each tile should be. This takes precedence over tile_count if specified. Either tile_count, tile_size, or lettered_grid should be specified.
lettered_grid (bool) – Whether to use a preconfigured grid and label tiles with letters and numbers instead of only numbers. For example, tiles will be named "A01", "A02", "B01", and so on in the first row of data and continue on to "A03", "A04", and "B03" in the default case where num_subtiles is (2, 2). Letters start in the upper-left corner and will go from A up to Z, if necessary.
num_subtiles (tuple) – For lettered tiles only, how many rows and columns to split each lettered tile in to. By default 2 rows and 2 columns will be created. For example, the tile for letter "A" will have "A01" and "A02" in the top row and "A03" and "A04" in the second row.
use_end_time (bool) – Instead of using the
start_time
for the product filename and time written to the file, use theend_time
. This is useful for multi-day composites where theend_time
is a better representation of what data is in the file.use_sector_reference (bool) – For lettered tiles only, whether to shift the data locations to align with the preconfigured grid’s pixels. By default this is False meaning that the grid’s tiles will be shifted to align with the data locations. If True, the data is shifted. At most the data will be shifted by 0.5 pixels. See
satpy.writers.scmi
for more information.template (str or dict) – Name of the template configured in the writer YAML file. This can also be a dictionary with a full template configuration. See the
satpy.writers.scmi
documentation for more information on templates. Defaults to the ‘polar’ builtin template.check_categories (bool) – Whether category and flag products should be included in the checks for empty or not empty tiles. In some cases (ex. data quality flags) category products may look like all valid data (a non-empty tile) but shouldn’t be used to determine the emptiness of the overall tile (good quality versus non-existent). Default is True. Set to False to ignore category (integer dtype or “flag_meanings” defined) when checking for valid data.
extra_global_attrs (dict) – Additional global attributes to be added to every produced file. These attributes are applied at the end of template rendering and will therefore overwrite template generated values with the same global attribute name.
compute (bool) – Compute and write the output immediately using dask. Defaults to False.
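The lettered naming scheme described above can be illustrated with a small standalone sketch (a hypothetical helper, not the writer's actual implementation):

```python
def subtile_ids(letter, num_subtiles=(2, 2)):
    """Return the tile names for one lettered tile, row by row.

    Sketch of the naming described above: for letter "A" with the
    default (2, 2) subtiles, the top row is A01, A02 and the second
    row is A03, A04.
    """
    rows, cols = num_subtiles
    return [[f"{letter}{r * cols + c + 1:02d}" for c in range(cols)]
            for r in range(rows)]

subtile_ids("A")  # [['A01', 'A02'], ['A03', 'A04']]
```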
-
class
satpy.writers.awips_tiled.
LetteredTileGenerator
(area_definition, extents, cell_size=(2000000, 2000000), num_subtiles=None, use_sector_reference=False)[source]¶ Bases:
satpy.writers.awips_tiled.NumberedTileGenerator
Helper class to generate per-tile metadata for lettered tiles.
Initialize tile information for later generation.
-
class
satpy.writers.awips_tiled.
NetCDFTemplate
(template_dict)[source]¶ Bases:
object
Helper class to convert a dictionary-based NetCDF template to an xarray.Dataset.
Parse the template dictionary and prepare for rendering.
-
get_attr_value
(attr_name, input_metadata, value=None, raw_key=None, raw_value=None, prefix='_')[source]¶ Determine attribute value using the provided configuration information.
If value and raw_key are not provided, this method will search for a method named
<prefix><attr_name>
, which will be called with one argument (input_metadata) to get the value to return. See the documentation for the prefix keyword argument below for more information.- Parameters
attr_name (str) – Name of the attribute whose value we are generating.
input_metadata (dict) – Dictionary of metadata from the input DataArray and other context information. Used to provide information to value or access data from using raw_key if provided.
value (Any) – Value to assign to this attribute. If a string, it may be a python format string which will be provided the data from input_metadata. For example, {name} will be filled with the value for "name" in input_metadata. It can also include environment variables (ex. "${MY_ENV_VAR}") which will be expanded. String formatting is accomplished by the special trollsift.parser.StringFormatter which allows for special common conversions.
raw_key (str) – Key to access a value from input_metadata, but without any string formatting applied to it. This allows for metadata of non-string types to be requested.
raw_value (Any) – Static hardcoded value to set this attribute to. Overrides all other options.
prefix (str) – Prefix to use when value and raw_key are both None. Default is "_". This will be used to find custom attribute handlers in subclasses. For example, if value and raw_key are both None and attr_name is "my_attr", then the method self._my_attr will be called as return self._my_attr(input_metadata). See NetCDFTemplate.render_global_attributes() for additional information (prefix is "_global_").
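The lookup order described above can be sketched as a simplified standalone re-implementation (hypothetical code; it uses plain str.format where satpy actually uses the trollsift-based formatter):

```python
import os


class TemplateSketch:
    """Simplified sketch of the get_attr_value lookup order described above."""

    def _institution(self, input_metadata):
        # Custom handler found via the "_" prefix when value/raw_key are absent.
        return input_metadata.get("institution", "unknown")

    def get_attr_value(self, attr_name, input_metadata,
                       value=None, raw_key=None, raw_value=None, prefix="_"):
        if raw_value is not None:
            return raw_value                # static hardcoded value wins
        if raw_key is not None:
            return input_metadata[raw_key]  # raw metadata, no string formatting
        if value is not None:
            # Expand ${ENV_VARS}, then fill {name}-style fields from metadata.
            return os.path.expandvars(value).format(**input_metadata)
        handler = getattr(self, prefix + attr_name, None)
        return handler(input_metadata) if handler is not None else None


t = TemplateSketch()
t.get_attr_value("long_name", {"name": "IR_108"}, value="{name}")  # 'IR_108'
t.get_attr_value("institution", {"institution": "EUMETSAT"})       # 'EUMETSAT'
```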
-
render
(dataset_or_data_arrays, shared_attrs=None)[source]¶ Create
xarray.Dataset
from provided data.
-
-
class
satpy.writers.awips_tiled.
NumberedTileGenerator
(area_definition, tile_shape=None, tile_count=None)[source]¶ Bases:
object
Helper class to generate per-tile metadata for numbered tiles.
Initialize and generate tile information for this sector/grid for later use.
-
class
satpy.writers.awips_tiled.
TileInfo
(tile_count, image_shape, tile_shape, tile_row_offset, tile_column_offset, tile_id, tile_number, x, y, xy_factors, tile_slices, data_slices)¶ Bases:
tuple
Create new instance of TileInfo(tile_count, image_shape, tile_shape, tile_row_offset, tile_column_offset, tile_id, tile_number, x, y, xy_factors, tile_slices, data_slices)
-
property
data_slices
¶ Alias for field number 11
-
property
image_shape
¶ Alias for field number 1
-
property
tile_column_offset
¶ Alias for field number 4
-
property
tile_count
¶ Alias for field number 0
-
property
tile_id
¶ Alias for field number 5
-
property
tile_number
¶ Alias for field number 6
-
property
tile_row_offset
¶ Alias for field number 3
-
property
tile_shape
¶ Alias for field number 2
-
property
tile_slices
¶ Alias for field number 10
-
property
x
¶ Alias for field number 7
-
property
xy_factors
¶ Alias for field number 9
-
property
y
¶ Alias for field number 8
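The “Alias for field number N” entries above indicate TileInfo is a namedtuple; a minimal equivalent with the same field order would be:

```python
from collections import namedtuple

# Field order reconstructed from the "Alias for field number N" entries above.
TileInfo = namedtuple("TileInfo", [
    "tile_count", "image_shape", "tile_shape",   # fields 0-2
    "tile_row_offset", "tile_column_offset",     # fields 3-4
    "tile_id", "tile_number", "x", "y",          # fields 5-8
    "xy_factors", "tile_slices", "data_slices",  # fields 9-11
])

info = TileInfo(*range(12))
info.tile_number  # field 6 -> 6
```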
-
class
satpy.writers.awips_tiled.
XYFactors
(mx, bx, my, by)¶ Bases:
tuple
Create new instance of XYFactors(mx, bx, my, by)
-
property
bx
¶ Alias for field number 1
-
property
by
¶ Alias for field number 3
-
property
mx
¶ Alias for field number 0
-
property
my
¶ Alias for field number 2
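XYFactors is likewise a namedtuple. The example below assumes (based only on the field names, which this page does not explain) that mx/bx and my/by are the slope and intercept of the linear pixel-to-coordinate mappings:

```python
from collections import namedtuple

# Field order from the alias numbers above: mx=0, bx=1, my=2, by=3.
XYFactors = namedtuple("XYFactors", ["mx", "bx", "my", "by"])

# Hypothetical use, assuming slope/intercept semantics:
#   x = mx * col + bx,  y = my * row + by
f = XYFactors(mx=1000.0, bx=-500.0, my=-1000.0, by=500.0)
x = f.mx * 2 + f.bx  # 1500.0
```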
-
satpy.writers.awips_tiled.
create_debug_lettered_tiles
(**writer_kwargs)[source]¶ Create tile files with tile identifiers “burned” into the image data for debugging.
-
satpy.writers.awips_tiled.
draw_rectangle
(draw, coordinates, outline=None, fill=None, width=1)[source]¶ Draw a simple rectangle into a numpy array image.
-
satpy.writers.awips_tiled.
fix_awips_file
(fn)[source]¶ Modify NetCDF4 files to work around NetCDF-Java bugs in the library used by AWIPS.
This should not be needed for new versions of AWIPS.
-
satpy.writers.awips_tiled.
tile_filler
(data_arr_data, tile_shape, tile_slices, fill_value)[source]¶ Create an empty tile array and fill the proper locations with data.
-
satpy.writers.awips_tiled.
to_nonempty_netcdf
(dataset_to_save, factors, output_filename, update_existing=True, check_categories=True, fix_awips=False)[source]¶ Save
xarray.Dataset
to a NetCDF file only if it is not entirely fill values.
In addition to checking certain Dataset variables for fill values, this function can also “update” an existing NetCDF file with the new valid data provided.
satpy.writers.simple_image module¶
-
class
satpy.writers.simple_image.
PillowWriter
(**kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
Initialize image writer object.
- Parameters
name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the
.attrs
of a DataArray object may be included. Format and conversion specifiers provided by thetrollsift
package may also be used. Any directories in the provided pattern will be created if they do not exist. Example:{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
base_dir (str) – Base destination directories for all created files.
enhance (bool or Enhancer) – Whether to automatically enhance data to be more visually useful and to fit inside the file format being saved to. By default this will use the enhancement configuration files found using the default Enhancer class. This can be set to False so that no enhancements are performed. This can also be an instance of the Enhancer class if further custom enhancement is needed.
enhancement_config (str) – Deprecated.
kwargs (dict) – Additional keyword arguments to pass to the
Writer
base class.
Changed in version 0.10: Deprecated enhancement_config_file and ‘enhancer’ in favor of enhance. Pass an instance of the Enhancer class to enhance instead.
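The filename pattern behavior can be sketched with plain str.format (satpy actually uses trollsift, which additionally supports conversion specifiers; the attribute values here are hypothetical):

```python
from datetime import datetime

# The pattern is the example from the parameter description above.
pattern = "{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif"
attrs = {
    "platform_name": "Meteosat-11",
    "sensor": "seviri",
    "name": "VIS006",
    "start_time": datetime(2019, 3, 1, 12, 0),
}
print(pattern.format(**attrs))  # Meteosat-11_seviri_VIS006_20190301_120000.tif
```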
-
save_image
(img, filename=None, compute=True, **kwargs)[source]¶ Save Image object to a given
filename
.- Parameters
img (trollimage.xrimage.XRImage) – Image object to save to disk.
filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
compute (bool) – If True (default), compute and save the dataset. If False return either a dask.delayed.Delayed object or tuple of (source, target). See the return values below for more information.
**kwargs – Keyword arguments to pass to the image's save method.
- Returns
Value returned depends on compute. If compute is True then the return value is the result of computing a dask.delayed.Delayed object or running dask.array.store. If compute is False then the returned value is either a dask.delayed.Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store. If target is provided then the caller is responsible for calling target.close() if the target has this method.
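The compute=False contract can be sketched with plain dask (a generic pattern, not satpy-specific; the arrays stand in for real image data and output files):

```python
import dask
import dask.array as da
import numpy as np

source = da.ones((4, 4), chunks=2)  # stands in for the image data
target = np.zeros((4, 4))           # stands in for an open output file

# The (source, target) form: nothing is written until dask.array.store runs.
delayed = da.store(source, target, compute=False)
dask.compute(delayed)               # now the "file" has been written
assert target.sum() == 16.0
```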
satpy.writers.utils module¶
Writer utilities.
Module contents¶
Shared objects of the various writer classes.
For now, this includes enhancement configuration utilities.
-
class
satpy.writers.
DecisionTree
(decision_dicts, match_keys, multival_keys=None)[source]¶ Bases:
object
Structure to search for nearest match from a set of parameters.
This class is used to find the best configuration section by matching a set of attributes. The provided dictionary contains a mapping of “section name” to “decision” dictionaries. Each decision dictionary contains the attributes that will be used for matching plus any additional keys that could be useful when matched. This class will search these decisions and return the one with the most matching parameters to the attributes passed to the
find_match()
method.
Note that decision sections are provided as a dict instead of a list so that they can be overwritten or updated by doing the equivalent of a
current_dicts.update(new_dicts)
.Examples
Decision sections are provided as a dictionary of dictionaries. The returned match will be the first result found by searching provided match_keys in order.
decisions = {
    'first_section': {
        'a': 1,
        'b': 2,
        'useful_key': 'useful_value',
    },
    'second_section': {
        'a': 5,
        'useful_key': 'other_useful_value1',
    },
    'third_section': {
        'b': 4,
        'useful_key': 'other_useful_value2',
    },
}
tree = DecisionTree(decisions, ('a', 'b'))
tree.find_match(a=5, b=2)  # second_section dict
tree.find_match(a=1, b=2)  # first_section dict
tree.find_match(a=5, b=4)  # second_section dict
tree.find_match(a=3, b=2)  # no match
Init the decision tree.
- Parameters
decision_dicts (dict) – Dictionary of dictionaries. Each sub-dictionary contains key/value pairs that can be matched from the find_match method. Sub-dictionaries can include additional keys outside of the
match_keys
provided to act as the “result” of a query. The keys of the root dict are arbitrary.match_keys (list) – Keys of the provided dictionary to use for matching.
multival_keys (list) – Keys of match_keys that can be provided as multiple values. A multi-value key can be specified as a single value (typically a string) or a set. If a set, it will be sorted and converted to a tuple and then used for matching. When querying the tree, these keys will be searched for exact multi-value results (the sorted tuple) and if not found then each of the values will be searched individually in alphabetical order.
-
any_key
= None¶
-
class
satpy.writers.
EnhancementDecisionTree
(*decision_dicts, **kwargs)[source]¶ Bases:
satpy.writers.DecisionTree
The enhancement decision tree.
Init the decision tree.
-
class
satpy.writers.
Enhancer
(ppp_config_dir=None, enhancement_config_file=None)[source]¶ Bases:
object
Helper class to get enhancement information for images.
Initialize an Enhancer instance.
- Parameters
ppp_config_dir – Points to the base configuration directory
enhancement_config_file – The enhancement configuration to apply, False to leave as is.
-
class
satpy.writers.
ImageWriter
(name=None, filename=None, base_dir=None, enhance=None, enhancement_config=None, **kwargs)[source]¶ Bases:
satpy.writers.Writer
Base writer for image file formats.
Initialize image writer object.
- Parameters
name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the
.attrs
of a DataArray object may be included. Format and conversion specifiers provided by thetrollsift
package may also be used. Any directories in the provided pattern will be created if they do not exist. Example:{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
base_dir (str) – Base destination directories for all created files.
enhance (bool or Enhancer) – Whether to automatically enhance data to be more visually useful and to fit inside the file format being saved to. By default this will use the enhancement configuration files found using the default Enhancer class. This can be set to False so that no enhancements are performed. This can also be an instance of the Enhancer class if further custom enhancement is needed.
enhancement_config (str) – Deprecated.
kwargs (dict) – Additional keyword arguments to pass to the
Writer
base class.
Changed in version 0.10: Deprecated enhancement_config_file and ‘enhancer’ in favor of enhance. Pass an instance of the Enhancer class to enhance instead.
-
save_dataset
(dataset, filename=None, fill_value=None, overlay=None, decorate=None, compute=True, **kwargs)[source]¶ Save the
dataset
to a givenfilename
.This method creates an enhanced image using
get_enhanced_image()
. The image is then passed tosave_image()
. See both of these functions for more details on the arguments passed to this method.
-
save_image
(img, filename=None, compute=True, **kwargs)[source]¶ Save Image object to a given
filename
.- Parameters
img (trollimage.xrimage.XRImage) – Image object to save to disk.
filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
compute (bool) – If True (default), compute and save the dataset. If False return either a Delayed object or tuple of (source, target). See the return values below for more information.
**kwargs – Other keyword arguments to pass to this writer.
- Returns
Value returned depends on compute. If compute is True then the return value is the result of computing a Delayed object or running
dask.array.store()
. If compute is False then the returned value is either a Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed todask.array.store()
. If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
class
satpy.writers.
Writer
(name=None, filename=None, base_dir=None, **kwargs)[source]¶ Bases:
satpy.plugin_base.Plugin
Base Writer class for all other writers.
A minimal writer subclass should implement the save_dataset method.
Initialize the writer object.
- Parameters
name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the
.attrs
of a DataArray object may be included. Format and conversion specifiers provided by thetrollsift
package may also be used. Any directories in the provided pattern will be created if they do not exist. Example:{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
base_dir (str) – Base destination directories for all created files.
kwargs (dict) – Additional keyword arguments to pass to the
Plugin
class.
-
create_filename_parser
(base_dir)[source]¶ Create a
trollsift.parser.Parser
object for later use.
-
get_filename
(**kwargs)[source]¶ Create a filename where output data will be saved.
- Parameters
kwargs (dict) – Attributes and other metadata to use for formatting the previously provided filename.
-
save_dataset
(dataset, filename=None, fill_value=None, compute=True, **kwargs)[source]¶ Save the
dataset
to a givenfilename
.This method must be overloaded by the subclass.
- Parameters
dataset (xarray.DataArray) – Dataset to save using this writer.
filename (str) – Optionally specify the filename to save this dataset to. If not provided then filename which can be provided to the init method will be used and formatted by dataset attributes.
fill_value (int or float) – Replace invalid values in the dataset with this fill value if applicable to this writer.
compute (bool) – If True (default), compute and save the dataset. If False return either a Delayed object or tuple of (source, target). See the return values below for more information.
**kwargs – Other keyword arguments for this particular writer.
- Returns
Value returned depends on compute. If compute is True then the return value is the result of computing a Delayed object or running
dask.array.store()
. If compute is False then the returned value is either a Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed todask.array.store()
. If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
save_datasets
(datasets, compute=True, **kwargs)[source]¶ Save all datasets to one or more files.
Subclasses can use this method to save all datasets to one single file or optimize the writing of individual datasets. By default this simply calls save_dataset for each dataset provided.
- Parameters
datasets (iterable) – Iterable of xarray.DataArray objects to save using this writer.
compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a Delayed object or two lists to be passed to a
dask.array.store()
call. See return values below for more details.**kwargs – Keyword arguments to pass to save_dataset. See that documentation for more details.
- Returns
Value returned depends on compute keyword argument. If compute is True the value is the result of a either a
dask.array.store()
operation or a Delayed compute, typically this is None. If compute is False then the result is either a Delayed object that can be computed with delayed.compute() or a two element tuple of sources and targets to be passed todask.array.store()
. If targets is provided then it is the caller’s responsibility to close any objects that have a “close” method.
-
classmethod
separate_init_kwargs
(kwargs)[source]¶ Helper to separate arguments between init and save methods.
Currently the
Scene
is passed one set of arguments to represent the Writer creation and saving steps. This is not preferred for Writer structure, but provides a simpler interface to users. This method splits the provided keyword arguments between those needed for initialization and those needed for thesave_dataset
andsave_datasets
method calls.Writer subclasses should try to prefer keyword arguments only for the save methods only and leave the init keyword arguments to the base classes when possible.
-
satpy.writers.
add_decorate
(orig, fill_value=None, **decorate)[source]¶ Decorate an image with text and/or logos/images.
This call adds text/logos in order as given in the input to keep the alignment features available in pydecorate.
An example of the decorate config:
decorate = {
    'decorate': [
        {'logo': {'logo_path': <path to a logo>, 'height': 143,
                  'bg': 'white', 'bg_opacity': 255}},
        {'text': {'txt': start_time_txt,
                  'align': {'top_bottom': 'bottom', 'left_right': 'right'},
                  'font': <path to ttf font>, 'font_size': 22,
                  'height': 30, 'bg': 'black', 'bg_opacity': 255,
                  'line': 'white'}}
    ]
}
Any number of text/logo entries, in any order, can be added to the decorate list; the order of the list is kept as described above.
Note that a feature given in one element, e.g. bg (the background color), will also apply to the following elements unless a new value is given.
align is a special keyword telling where in the image to start adding features; top_bottom is either top or bottom and left_right is either left or right.
-
satpy.writers.
add_logo
(orig, dc, img, logo)[source]¶ Add logos or other images to an image using the pydecorate package.
All the features of pydecorate’s
add_logo
are available. See the Pydecorate documentation for more info.
-
satpy.writers.
add_overlay
(orig_img, area, coast_dir, color=None, width=None, resolution=None, level_coast=None, level_borders=None, fill_value=None, grid=None, overlays=None)[source]¶ Add coastlines, political borders and grid (graticules) to the image.
Uses color for feature colors, where color is a 3-element tuple of integers between 0 and 255 representing (R, G, B).
Warning
This function currently loses the data mask (alpha band).
resolution is chosen automatically if None (default), otherwise it should be one of:
‘f’  Full resolution: 0.04 km
‘h’  High resolution: 0.2 km
‘i’  Intermediate resolution: 1.0 km
‘l’  Low resolution: 5.0 km
‘c’  Crude resolution: 25 km
grid is a dictionary with key values as documented in detail in pycoast, e.g.:

overlay = {'grid': {'major_lonlat': (10, 10),
                    'write_text': False,
                    'outline': (224, 224, 224),
                    'width': 0.5}}

Here major_lonlat is plotted every 10 deg for both longitude and latitude, no labels for the grid lines are plotted, the color used for the grid lines is light gray, and the width of the graticules is 0.5 pixels.
For grid overlays, if aggdraw is used the font option is mandatory unless write_text is set to False:

font = aggdraw.Font('black', '/usr/share/fonts/truetype/msttcorefonts/Arial.ttf',
                    opacity=127, size=16)
-
satpy.writers.
add_scale
(orig, dc, img, scale)[source]¶ Add scale to an image using the pydecorate package.
All the features of pydecorate’s
add_scale
are available. See the Pydecorate documentation for more info.
-
satpy.writers.
add_text
(orig, dc, img, text)[source]¶ Add text to an image using the pydecorate package.
All the features of pydecorate’s
add_text
are available. See the Pydecorate documentation for more info.
-
satpy.writers.
available_writers
(as_dict=False)[source]¶ Available writers based on current configuration.
- Parameters
as_dict (bool) – Optionally return writer information as a dictionary. Default: False
- Returns: List of available writer names. If as_dict is True then
a list of dictionaries including additional writer information is returned.
-
satpy.writers.
compute_writer_results
(results)[source]¶ Compute all the given dask graph results so that the files are saved.
- Parameters
results (iterable) – Iterable of dask graphs resulting from calls to scn.save_datasets(…, compute=False)
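What this helper does can be sketched with plain dask, following the split_results behavior described in this module: separate (source, target) pairs from delayed objects, then compute everything. This is a simplified, hypothetical re-implementation, not satpy's actual code:

```python
import dask
import dask.array as da
import numpy as np


def compute_results_sketch(results):
    """Split (source, target) pairs from delayed objects, then compute all
    of them so the files are actually written."""
    sources, targets, delayeds = [], [], []
    for res in results:
        if isinstance(res, tuple):
            src, tgt = res
            sources.append(src)
            targets.append(tgt)
        else:
            delayeds.append(res)
    if sources:
        delayeds.append(da.store(sources, targets, compute=False))
    if delayeds:
        dask.compute(*delayeds)


# Tiny demonstration with an in-memory "file" target.
target = np.zeros((2, 2))
compute_results_sketch([(da.ones((2, 2), chunks=2), target)])
assert target.sum() == 4.0
```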
-
satpy.writers.
configs_for_writer
(writer=None, ppp_config_dir=None)[source]¶ Generate writer configuration files for one or more writers.
- Returns: Generator of lists of configuration files
-
satpy.writers.
get_enhanced_image
(dataset, ppp_config_dir=None, enhance=None, enhancement_config_file=None, overlay=None, decorate=None, fill_value=None)[source]¶ Get an enhanced version of dataset as an
XRImage
instance.- Parameters
dataset (xarray.DataArray) – Data to be enhanced and converted to an image.
ppp_config_dir (str) – Root configuration directory.
enhance (bool or Enhancer) – Whether to automatically enhance data to be more visually useful and to fit inside the file format being saved to. By default this will default to using the enhancement configuration files found using the default
Enhancer
class. This can be set to False so that no enhancments are performed. This can also be an instance of theEnhancer
class if further custom enhancement is needed.enhancement_config_file (str) – Deprecated.
overlay (dict) – Options for image overlays. See
add_overlay()
for available options.decorate (dict) – Options for decorating the image. See
add_decorate()
for available options.fill_value (int or float) – Value to use when pixels are masked or invalid. Default of None means to create an alpha channel. See
finalize()
for more details. Only used when adding overlays or decorations. Otherwise it is up to the caller to “finalize” the image before using it except if callingimg.show()
or providing the image to a writer as these will finalize the image.
Changed in version 0.10: Deprecated enhancement_config_file and ‘enhancer’ in favor of enhance. Pass an instance of the Enhancer class to enhance instead.
-
satpy.writers.
load_writer
(writer, ppp_config_dir=None, **writer_kwargs)[source]¶ Find and load the writer named writer from the available configuration files.
-
satpy.writers.
load_writer_configs
(writer_configs, ppp_config_dir, **writer_kwargs)[source]¶ Load the writer from the provided writer_configs.
-
satpy.writers.
read_writer_config
(config_files, loader=<class 'yaml.loader.UnsafeLoader'>)[source]¶ Read the writer config_files and return the info extracted.
-
satpy.writers.
split_results
(results)[source]¶ Split results.
Get sources, targets and delayed objects to separate lists from a list of results collected from (multiple) writer(s).
-
satpy.writers.
to_image
(dataset)[source]¶ Convert
dataset
into aXRImage
instance.Convert the
dataset
into an instance of theXRImage
class. This function makes no other changes. To get an enhanced image, possibly with overlays and decoration, seeget_enhanced_image()
.- Parameters
dataset (xarray.DataArray) – Data to be converted to an image.
- Returns
Instance of
XRImage
.