What’s New

v0.12.3 (10 July 2019)

New functions/methods

Enhancements

Bug fixes

  • Resolved deprecation warnings from newer versions of matplotlib and dask.

  • Compatibility fixes for the upcoming pandas 0.25 and NumPy 1.17 releases. By Stephan Hoyer.

  • Fix summaries for multiindex coordinates (GH3079). By Jonas Hörsch.

  • Fix HDF5 error that could arise when reading multiple groups from a file at once (GH2954). By Stephan Hoyer.

v0.12.2 (29 June 2019)

New functions/methods

The new combine_nested will accept the datasets as a nested list-of-lists, and combine by applying a series of concat and merge operations. The new combine_by_coords instead uses the dimension coordinates of datasets to order them.

open_mfdataset() can use either combine_nested or combine_by_coords to combine datasets along multiple dimensions, by specifying the argument combine='nested' or combine='by_coords'.

The older function auto_combine() has been deprecated, because its functionality has been subsumed by the new functions. To avoid FutureWarnings switch to using combine_nested or combine_by_coords (or set the combine argument in open_mfdataset). (GH2159) By Tom Nicholas.
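A minimal sketch of how the two new functions relate (small in-memory datasets are used purely for illustration; for files on disk, open_mfdataset with combine='nested' or combine='by_coords' does the same in one step):

    import xarray as xr

    # Two pieces of a dataset split along "x".
    ds0 = xr.Dataset({"t": ("x", [10.0, 11.0])}, coords={"x": [0, 1]})
    ds1 = xr.Dataset({"t": ("x", [12.0, 13.0])}, coords={"x": [2, 3]})

    # Combine by explicit position in a nested list ...
    by_position = xr.combine_nested([ds0, ds1], concat_dim="x")

    # ... or let xarray order the pieces from their dimension coordinates.
    by_coords = xr.combine_by_coords([ds1, ds0])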

  • DataArray.rolling_exp() and Dataset.rolling_exp() added, similar to pandas' pd.DataFrame.ewm method. Calling .mean on the resulting object will return an exponentially weighted moving average. By Maximilian Roos.

  • New DataArray.str for string-related manipulations, based on pandas.Series.str. By 0x0L.

  • Added strftime method to .dt accessor, making it simpler to hand a datetime DataArray to other code expecting formatted dates and times. (GH2090). strftime() is also now available on CFTimeIndex. By Alan Brammer and Ryan May.

  • quantile() is now a method of GroupBy objects (GH3018). By David Huard.

  • Argument and return types are added to most methods on DataArray and Dataset, allowing static type checking both within xarray and external libraries. Type checking with mypy is enabled in CI (though not required yet). By Guido Imperiale and Maximilian Roos.

Enhancements to existing functionality

  • Implement load_dataset() and load_dataarray() as alternatives to open_dataset() and open_dataarray() to open, load into memory, and close files, returning the Dataset or DataArray. These functions are helpful for avoiding file-lock errors when trying to write to files opened using open_dataset() or open_dataarray(). (GH2887) By Dan Nowacki.

  • It is now possible to extend existing Zarr datasets, by using mode='a' and the new append_dim argument in to_zarr(); see the sketch after this list. By Jendrik Jördening, David Brochart, Ryan Abernathey and Shikhar Goenka.

  • xr.open_zarr now accepts manually specified chunks with the chunks= parameter. auto_chunk=True is equivalent to chunks='auto' for backwards compatibility. The overwrite_encoded_chunks parameter is added to remove the original zarr chunk encoding. By Lily Wang.

  • netCDF chunksizes are now only dropped when original_shape is different, not when it isn't found. (GH2207) By Karel van de Plassche.

  • Character arrays' character dimension name decoding and encoding handled by var.encoding['char_dim_name'] (GH2895). By James McCreight.

  • open_rasterio() now supports rasterio.vrt.WarpedVRT with custom transform, width and height (GH2864). By Julien Michel.
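A minimal sketch of appending to a Zarr store as described above (assuming the zarr package is installed; the store path is a placeholder):

    import xarray as xr

    ds = xr.Dataset({"t": ("time", [1.0, 2.0])}, coords={"time": [0, 1]})
    more = xr.Dataset({"t": ("time", [3.0, 4.0])}, coords={"time": [2, 3]})

    ds.to_zarr("example.zarr", mode="w")
    # Append further steps along the existing "time" dimension.
    more.to_zarr("example.zarr", mode="a", append_dim="time")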

Bug fixes

v0.12.1 (4 April 2019)

Enhancements

  • Allow expand_dims method to support inserting/broadcasting dimensions with size > 1. (GH2710) By Martin Pletcher.
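A minimal sketch of the new behaviour (the dimension name and size are arbitrary examples):

    import xarray as xr

    da = xr.DataArray([1, 2, 3], dims="x")
    # Insert a new leading dimension of size 4, broadcasting the data along it.
    expanded = da.expand_dims(dim={"ensemble": 4})
    # expanded.sizes -> {'ensemble': 4, 'x': 3}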

Bug fixes

v0.12.0 (15 March 2019)

Highlights include:

Deprecations

  • The compat argument to Dataset and the encoding argument to DataArray are deprecated and will be removed in a future release. (GH1188) By Maximilian Roos.

Other enhancements

Bug fixes

  • Silenced warnings that appear when using pandas 0.24. By Stephan Hoyer

  • Interpolating via resample now internally specifies bounds_error=False as an argument to scipy.interpolate.interp1d, allowing for interpolation from higher frequencies to lower frequencies. Datapoints outside the bounds of the original time coordinate are now filled with NaN (GH2197). By Spencer Clark.

  • Line plots with the x argument set to a non-dimensional coord now plot the correct data for 1D DataArrays. (GH27251). By Tom Nicholas.

  • Subtracting a scalar cftime.datetime object from a CFTimeIndex now results in a pandas.TimedeltaIndex instead of raising a TypeError (GH2671). By Spencer Clark.

  • backend_kwargs are no longer ignored when using open_dataset with the pynio engine (GH2380). By Jonathan Joyce.

  • Fix open_rasterio creating a WKT CRS instead of PROJ.4 with rasterio 1.0.14+ (GH2715). By David Hoese.

  • Masking data arrays with xarray.DataArray.where() now returns an array with the name of the original masked array (GH2748 and GH2457). By Yohai Bar-Sinai.

  • Fixed error when trying to reduce a DataArray using a function which does not require an axis argument. (GH2768) By Tom Nicholas.

  • Concatenating a sequence of DataArrays with varying names sets the name of the output array to None, instead of the name of the first input array. If the names are all the same, the output keeps that name, instead of taking the name of the first DataArray in the list as it did before. (GH2775). By Tom Nicholas.

  • Per CF conventions, specifying 'standard' as the calendar type in cftime_range() now correctly refers to the 'gregorian' calendar instead of the 'proleptic_gregorian' calendar (GH2761).

v0.11.3 (26 January 2019)

Bug fixes

  • Saving files with times encoded with reference dates with timezones (e.g. '2000-01-01T00:00:00-05:00') no longer raises an error (GH2649). By Spencer Clark.

  • Fixed performance regression with open_mfdataset (GH2662). By Tom Nicholas.

  • Fixed supplying an explicit dimension in the concat_dim argument to open_mfdataset (GH2647). By Ben Root.

v0.11.2 (2 January 2019)

Removes inadvertently introduced setup dependency on pytest-runner (GH2641). Otherwise, this release is exactly equivalent to 0.11.1.

Warning

This is the last xarray release that will support Python 2.7. Future releases will be Python 3 only, but older versions of xarray will always be available for Python 2.7 users. For more details, see:

v0.11.1 (29 December 2018)

This minor release includes a number of enhancements and bug fixes, and two (slightly) breaking changes.

Breaking changes

  • Minimum rasterio version increased from 0.36 to 1.0 (for open_rasterio)

  • Time bounds variables are now also decoded according to CF conventions (GH2565). The previous behavior was to decode them only if they had specific time attributes; now these attributes are copied automatically from the corresponding time coordinate. This might break downstream code that was relying on these variables not being decoded. By Fabien Maussion.

Enhancements

  • Ability to read and write consolidated metadata in zarr stores (GH2558). By Ryan Abernathey.

  • CFTimeIndex uses slicing for string indexing when possible (like pandas.DatetimeIndex), which avoids unnecessary copies. By Stephan Hoyer

  • Enable passing rasterio.io.DatasetReader or rasterio.vrt.WarpedVRT to open_rasterio instead of a file path string. Allows for in-memory reprojection, see (GH2588). By Scott Henderson.

  • Like pandas.DatetimeIndex, CFTimeIndex now supports "dayofyear" and "dayofweek" accessors (GH2597). Note this requires a version of cftime greater than 1.0.2. By Spencer Clark.

  • The option 'warn_for_unclosed_files' (False by default) has been added to allow users to enable a warning when files opened by xarray are deallocated but were not explicitly closed. This is mostly useful for debugging; we recommend enabling it in your test suites if you use xarray for IO. By Stephan Hoyer

  • Support Dask HighLevelGraphs by Matthew Rocklin.

  • DataArray.resample() and Dataset.resample() now support the loffset kwarg just like pandas. By Deepak Cherian

  • Datasets are now guaranteed to have a 'source' encoding, so the source file name is always stored (GH2550). By Tom Nicholas.

  • The apply methods for DatasetGroupBy, DataArrayGroupBy, DatasetResample and DataArrayResample now support passing positional arguments to the applied function as a tuple to the args argument. By Matti Eskelinen.

  • 0d slices of ndarrays are now obtained directly through indexing, rather than extracting and wrapping a scalar, avoiding unnecessary copying. By Daniel Wennberg.

  • Added support for fill_value with DataArray.shift() and Dataset.shift(). By Maximilian Roos
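A minimal sketch of the new keyword (the values noted in the comment assume this toy input):

    import xarray as xr

    da = xr.DataArray([1, 2, 3], dims="x")
    # Shift by one along x, filling the vacated position with 0 instead of NaN.
    shifted = da.shift(x=1, fill_value=0)
    # shifted.values -> [0, 1, 2]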

Bug fixes

  • Ensure files are automatically closed, if possible, when no longer referenced by a Python variable (GH2560). By Stephan Hoyer

  • Fixed possible race conditions when reading/writing to disk in parallel (GH2595). By Stephan Hoyer

  • Fix h5netcdf saving scalars with filters or chunks (GH2563). By Martin Raspaud.

  • Fix parsing of _Unsigned attribute set by OPENDAP servers. (GH2583). By Deepak Cherian

  • Fix failure in time encoding when exporting to netCDF with versions of pandas less than 0.21.1 (GH2623). By Spencer Clark.

  • Fix MultiIndex selection to update label and level (GH2619). By Keisuke Fujii.

v0.11.0 (7 November 2018)

Breaking changes

  • Finished deprecations (changed behavior with this release):

    • Dataset.T has been removed as a shortcut for Dataset.transpose(). Call Dataset.transpose() directly instead.

    • Iterating over a Dataset now includes only data variables, not coordinates. Similarly, calling len and bool on a Dataset now includes only data variables.

    • DataArray.__contains__ (used by Python's in operator) now checks array data, not coordinates.

    • The old resample syntax from before xarray 0.10, e.g., data.resample('1D', dim='time', how='mean'), is no longer supported and will raise an error in most cases. You need to use the new resample syntax instead, e.g., data.resample(time='1D').mean() or data.resample({'time': '1D'}).mean().

  • New deprecations (behavior will be changed in xarray 0.12):

  • Refactored storage backends:

    • Xarray's storage backends now automatically open and close files when necessary, rather than requiring opening a file with autoclose=True. A global least-recently-used cache is used to store open files; the default limit of 128 open files should suffice in most cases, but can be adjusted if necessary with xarray.set_options(file_cache_maxsize=…). The autoclose argument to open_dataset and related functions has been deprecated and is now a no-op.

This change, along with an internal refactor of xarray's storage backends, should significantly improve performance when reading and writing netCDF files with Dask, especially when working with many files or using Dask Distributed. By Stephan Hoyer

  • Support for non-standard calendars used in climate science:

    • Xarray will now always use cftime.datetime objects, rather than by default trying to coerce them into np.datetime64[ns] objects. A CFTimeIndex will be used for indexing along time coordinates in these cases.

    • A new method to_datetimeindex() has been added to aid in converting from a CFTimeIndex to a pandas.DatetimeIndex for the remaining use-cases where using a CFTimeIndex is still a limitation (e.g. for resample or plotting).

    • Setting the enable_cftimeindex option is now a no-op and emits aFutureWarning.

Enhancements

  • xarray.DataArray.plot.line() can now accept multidimensional coordinate variables as input. hue must be a dimension name in this case. (GH2407) By Deepak Cherian.

  • Added support for Python 3.7. (GH2271). By Joe Hamman.

  • Added support for plotting data with pandas.Interval coordinates, such as those created by groupby_bins(). By Maximilian Maahn.

  • Added shift() for shifting the values of a CFTimeIndex by a specified frequency. (GH2244). By Spencer Clark.

  • Added support for using cftime.datetime coordinates with DataArray.differentiate(), Dataset.differentiate(), DataArray.interp(), and Dataset.interp(). By Spencer Clark

  • There is now a global option to either always keep or always discard dataset and dataarray attrs upon operations. The option is set with xarray.set_options(keep_attrs=True), and the default is to use the old behaviour. By Tom Nicholas.

  • Added a new backend for the GRIB file format based on the ECMWF cfgrib Python driver and ecCodes C-library. (GH2475) By Alessandro Amici, sponsored by ECMWF.

  • Resample now supports a dictionary mapping from dimension to frequency as its first argument, e.g., data.resample({'time': '1D'}).mean(). This is consistent with other xarray functions that accept either dictionaries or keyword arguments. By Stephan Hoyer.

  • The preferred way to access tutorial data is now to load it lazily with xarray.tutorial.open_dataset(). xarray.tutorial.load_dataset() calls Dataset.load() prior to returning (and is now deprecated). This was changed in order to facilitate using tutorial datasets with dask. By Joe Hamman.

  • DataArray can now use xr.set_options(keep_attrs=True) and retain attributes in binary operations, such as +, -, *, /. Default behaviour is unchanged (attributes will be dismissed). By Michael Blaschek
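A minimal sketch of the keep_attrs option used as a context manager (the attribute shown is an arbitrary example):

    import xarray as xr

    da = xr.DataArray([1.0, 2.0], dims="x", attrs={"units": "m"})

    with xr.set_options(keep_attrs=True):
        result = da + 1       # attrs survive the binary operation
    # result.attrs -> {'units': 'm'}; outside the context, attrs are dropped as before.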

Bug fixes

  • FacetGrid now properly uses the cbar_kwargs keyword argument. (GH1504, GH1717) By Deepak Cherian.

  • Addition and subtraction operators used with a CFTimeIndex now preserve the index's type. (GH2244). By Spencer Clark.

  • We now properly handle arrays of datetime.datetime and datetime.timedelta provided as coordinates. (GH2512) By Deepak Cherian.

  • xarray.DataArray.roll correctly handles multidimensional arrays. (GH2445) By Keisuke Fujii.

  • xarray.plot() now properly accepts a norm argument and does not override the norm's vmin and vmax. (GH2381) By Deepak Cherian.

  • xarray.DataArray.std() now correctly accepts ddof keyword argument. (GH2240) By Keisuke Fujii.

  • Restore matplotlib's default of plotting dashed negative contours when a single color is passed to DataArray.contour() e.g. colors='k'. By Deepak Cherian.

  • Fix a bug that caused some indexing operations on arrays opened with open_rasterio to error (GH2454). By Stephan Hoyer.

  • Subtracting one CFTimeIndex from another now returns a pandas.TimedeltaIndex, analogous to the behavior for DatetimeIndexes (GH2484). By Spencer Clark.

  • Adding a TimedeltaIndex to, or subtracting a TimedeltaIndex from a CFTimeIndex is now allowed (GH2484). By Spencer Clark.

  • Avoid use of Dask's deprecated get= parameter in tests by Matthew Rocklin.

  • An OverflowError is now accurately raised and caught during the encoding process if a reference date is used that is so distant that the dates must be encoded using cftime rather than NumPy (GH2272). By Spencer Clark.

  • Chunked datasets can now roundtrip to Zarr storage continually with to_zarr and open_zarr (GH2300). By Lily Wang.

v0.10.9 (21 September 2018)

This minor release contains a number of backwards compatible enhancements.

Announcements of note:

  • Xarray is now a NumFOCUS fiscally sponsored project! Read the announcement for more details.

  • We have a new Development roadmap that outlines our future development plans.

  • Dataset.apply now properly documents the way func is called. By Matti Eskelinen.

Enhancements

  • DataArray.differentiate() and Dataset.differentiate() are newly added. (GH1332) By Keisuke Fujii.

  • Default colormap for sequential and divergent data can now be set via set_options() (GH2394). By Julius Busecke.

  • min_count option is newly supported in DataArray.sum(), DataArray.prod(), Dataset.sum(), and Dataset.prod(). (GH2230) By Keisuke Fujii.

  • plot() now accepts the kwargs xscale, yscale, xlim, ylim, xticks, yticks just like pandas. Also xincrease=False, yincrease=False now use matplotlib's axis inverting methods instead of setting limits. By Deepak Cherian. (GH2224)

  • DataArray coordinates and Dataset coordinates and data variables are now displayed as a b … y z rather than a b c d …. (GH1186) By Seth P.

  • A new CFTimeIndex-enabled cftime_range() function for use in generating dates from standard or non-standard calendars. By Spencer Clark.

  • When interpolating over a datetime64 axis, you can now provide a datetime string instead of a datetime64 object. E.g. da.interp(time='1991-02-01') (GH2284) By Deepak Cherian.

  • A clear error message is now displayed if a set or dict is passed in place of an array (GH2331). By Maximilian Roos.

  • Applying unstack to a large DataArray or Dataset is now much faster if the MultiIndex has not been modified after stacking the indices. (GH1560) By Maximilian Maahn.

  • You can now control whether or not to offset the coordinates when using the roll method; the current behavior (coordinates rolled by default) raises a deprecation warning unless the keyword argument is set explicitly. (GH1875) By Andrew Huang.

  • You can now call unstack without arguments to unstack every MultiIndex in a DataArray or Dataset. By Julia Signell.

  • Added the ability to pass a data kwarg to copy to create a new object with the same metadata as the original object but using new values. By Julia Signell.
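A minimal sketch of the data kwarg to copy (the replacement values are arbitrary; their shape must match the original):

    import numpy as np
    import xarray as xr

    da = xr.DataArray([1, 2, 3], dims="x", coords={"x": [10, 20, 30]}, attrs={"units": "m"})
    # Same dims, coords and attrs, but new underlying values.
    doubled = da.copy(data=np.array([2, 4, 6]))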

Bug fixes

  • xarray.plot.imshow() correctly uses the origin argument. (GH2379) By Deepak Cherian.

  • Fixed DataArray.to_iris() failure while creating DimCoord by falling back to creating AuxCoord. Fixed dependency on var_name attribute being set. (GH2201) By Thomas Voigt.

  • Fixed a bug in zarr backend which prevented use with datasets with invalid chunk size encoding after reading from an existing store (GH2278). By Joe Hamman.

  • Tests can be run in parallel with pytest-xdist. By Tony Tung.

  • Follow up the renamings in dask; from dask.ghost to dask.overlap. By Keisuke Fujii.

  • Now raises a ValueError when there is a conflict between dimension names and level names of MultiIndex. (GH2299) By Keisuke Fujii.

  • Now xr.apply_ufunc() raises a ValueError when the size of input_core_dims is inconsistent with the number of arguments. (GH2341) By Keisuke Fujii.

  • Fixed Dataset.filter_by_attrs() behavior not matching netCDF4.Dataset.get_variables_by_attributes(). When more than one key=value is passed into Dataset.filter_by_attrs() it will now return a Dataset with variables which pass all the filters. (GH2315) By Andrew Barna.

v0.10.8 (18 July 2018)

Breaking changes

  • Xarray no longer supports python 3.4. Additionally, the minimum supported versions of the following dependencies have been updated and/or clarified:

    • Pandas: 0.18 -> 0.19

    • NumPy: 1.11 -> 1.12

    • Dask: 0.9 -> 0.16

    • Matplotlib: unspecified -> 1.5

(GH2204). By Joe Hamman.

Enhancements

Bug fixes

v0.10.7 (7 June 2018)

Enhancements

Bug fixes

  • Fixed a bug in rasterio backend which prevented use with distributed. The rasterio backend now returns pickleable objects (GH2021). By Joe Hamman.

v0.10.6 (31 May 2018)

The minor release includes a number of bug-fixes and backwards compatible enhancements.

Enhancements

Bug fixes

  • Fixed a regression in 0.10.4, where explicitly specifying dtype='S1' or dtype=str in encoding with to_netcdf() raised an error (GH2149). Stephan Hoyer

  • apply_ufunc() now directly validates output variables (GH1931). By Stephan Hoyer.

  • Fixed a bug where to_netcdf(…, unlimited_dims='bar') yielded NetCDF files with spurious 0-length dimensions (i.e. b, a, and r) (GH2134). By Joe Hamman.

  • Removed spurious warnings with Dataset.update(Dataset) (GH2161) and array.equals(array) when array contains NaT (GH2162). By Stephan Hoyer.

  • Aggregations with Dataset.reduce() (including mean, sum, etc) no longer drop unrelated coordinates (GH1470). Also fixed a bug where non-scalar data-variables that did not include the aggregation dimension were improperly skipped. By Stephan Hoyer

  • Fix stack() with non-unique coordinates on pandas 0.23 (GH2160). By Stephan Hoyer

  • Selecting data indexed by a length-1 CFTimeIndex with a slice of strings now behaves as it does when using a length-1 DatetimeIndex (i.e. it no longer falsely returns an empty array when the slice includes the value in the index) (GH2165). By Spencer Clark.

  • Fix DataArray.groupby().reduce() mutating coordinates on the input array when grouping over dimension coordinates with duplicated entries (GH2153). By Stephan Hoyer

  • Fix Dataset.to_netcdf() cannot create group with engine="h5netcdf" (GH2177). By Stephan Hoyer

v0.10.4 (16 May 2018)

The minor release includes a number of bug-fixes and backwards compatible enhancements. A highlight is CFTimeIndex, which offers support for non-standard calendars used in climate modeling.

Documentation

Enhancements

  • Add an option for using a CFTimeIndex for indexing times with non-standard calendars and/or outside the Timestamp-valid range; this index enables a subset of the functionality of a standard pandas.DatetimeIndex. See Non-standard calendars and dates outside the Timestamp-valid range for full details. (GH789, GH1084, GH1252) By Spencer Clark with help from Stephan Hoyer.

  • Allow for serialization of cftime.datetime objects (GH789, GH1084, GH2008, GH1252) using the standalone cftime library. By Spencer Clark.

  • Support writing lists of strings as netCDF attributes (GH2044). By Dan Nowacki.

  • to_netcdf() with engine='h5netcdf' now accepts h5py encoding settings compression and compression_opts, along with the NetCDF4-Python style settings gzip=True and complevel. This allows using any compression plugin installed in hdf5, e.g. LZF (GH1536). By Guido Imperiale.

  • dot() on dask-backed data will now call dask.array.einsum(). This greatly boosts speed and allows chunking on the core dims. The function now requires dask >= 0.17.3 to work on dask-backed data (GH2074). By Guido Imperiale.

  • plot.line() learned new kwargs: xincrease, yincrease that change the direction of the respective axes. By Deepak Cherian.

  • Added the parallel option to open_mfdataset(). This option uses dask.delayed to parallelize the open and preprocessing steps within open_mfdataset. This is expected to provide performance improvements when opening many files, particularly when used in conjunction with dask's multiprocessing or distributed schedulers (GH1981). By Joe Hamman.

  • New compute option in to_netcdf(), to_zarr(), and save_mfdataset() to allow for the lazy computation of netCDF and zarr stores. This feature is currently only supported by the netCDF4 and zarr backends. (GH1784). By Joe Hamman.
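A minimal sketch of the compute=False workflow (assuming dask and a netCDF backend are installed; the output path is a placeholder):

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({"t": ("x", np.arange(10.0))}).chunk({"x": 5})

    # With compute=False the write is only scheduled; a dask delayed object is returned.
    delayed = ds.to_netcdf("lazy.nc", compute=False)
    delayed.compute()  # the data is actually written here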

Bug fixes

v0.10.3 (13 April 2018)

The minor release includes a number of bug-fixes and backwards compatible enhancements.

Enhancements

  • DataArray.isin() and Dataset.isin() methods, which test each value in the array for whether it is contained in the supplied list, returning a bool array (see the sketch after this list). See Selecting values with isin for full details. Similar to the np.isin function. By Maximilian Roos.

  • Some speed improvement to construct DataArrayRolling object (GH1993). By Keisuke Fujii.

  • Handle variables with different values for missing_value and _FillValue by masking values for both attributes; previously this resulted in a ValueError. (GH2016) By Ryan May.
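A minimal sketch of isin (the data values are arbitrary):

    import xarray as xr

    da = xr.DataArray([1, 2, 3, 4], dims="x")
    mask = da.isin([2, 4])
    # mask.values -> [False, True, False, True]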

Bug fixes

  • Fixed decode_cf function to operate lazily on dask arrays (GH1372). By Ryan Abernathey.

  • Fixed labeled indexing with slice bounds given by xarray objects with datetime64 or timedelta64 dtypes (GH1240). By Stephan Hoyer.

  • Attempting to convert an xarray.Dataset into a numpy array now raises an informative error message. By Stephan Hoyer.

  • Fixed a bug in decode_cf_datetime where int32 arrays weren't parsed correctly (GH2002). By Fabien Maussion.

  • When calling xr.auto_combine() or xr.open_mfdataset() with a concat_dim, the resulting dataset will have that one-element dimension (it was silently dropped, previously) (GH1988). By Ben Root.

v0.10.2 (13 March 2018)

The minor release includes a number of bug-fixes and enhancements, along with one possibly backwards incompatible change.

Backwards incompatible changes

  • The addition of __array_ufunc__ for xarray objects (see below) means that NumPy ufunc methods (e.g., np.add.reduce) that previously worked on xarray.DataArray objects by converting them into NumPy arrays will now raise NotImplementedError instead. In all cases, the work-around is simple: convert your objects explicitly into NumPy arrays before calling the ufunc (e.g., with .values).

Enhancements

  • Added dot(), equivalent to np.einsum(). Also, DataArray.dot() now supports a dims option, which specifies the dimensions to sum over. (GH1951) By Keisuke Fujii.

  • Support for writing xarray datasets to netCDF files (netcdf4 backend only) when using the dask.distributed scheduler (GH1464). By Joe Hamman.

  • Support lazy vectorized-indexing. After this change, flexible indexing such as orthogonal/vectorized indexing becomes possible for all the backend arrays. Also, lazy transpose is now supported. (GH1897) By Keisuke Fujii.

  • Implemented NumPy's __array_ufunc__ protocol for all xarray objects (GH1617). This enables using NumPy ufuncs directly on xarray.Dataset objects with recent versions of NumPy (v1.13 and newer):

    In [1]: ds = xr.Dataset({'a': 1})

    In [2]: np.sin(ds)
    Out[2]:
    <xarray.Dataset>
    Dimensions:  ()
    Data variables:
        a        float64 0.8415

This obviates the need for the xarray.ufuncs module, which will be deprecated in the future when xarray drops support for older versions of NumPy. By Stephan Hoyer.

  • Improve rolling() logic. The DataArrayRolling() object now supports a construct() method that returns a view of the DataArray / Dataset object with the rolling-window dimension added to the last axis (see the sketch after this list). This enables more flexible operation, such as strided rolling, windowed rolling, ND-rolling, short-time FFT and convolution. (GH1831, GH1142, GH819) By Keisuke Fujii.

  • line() learned to make plots with data on the x-axis if so specified. (GH575) By Deepak Cherian.
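A minimal sketch of the construct() method mentioned above (the window dimension name is arbitrary):

    import numpy as np
    import xarray as xr

    da = xr.DataArray(np.arange(6.0), dims="x")
    # View with a new "window" dimension of length 3 attached as the last axis.
    windows = da.rolling(x=3).construct("window")   # shape (6, 3)
    # Any reduction can now be applied over the window dimension, e.g. a strided mean:
    strided_mean = windows.mean("window")[::2]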

Bug fixes

v0.10.1 (25 February 2018)

The minor release includes a number of bug-fixes and backwards compatible enhancements.

Documentation

Enhancements

New functions and methods:

Plotting enhancements:

Other enhancements:

    In [3]: da = xr.DataArray(np.array([True, False, np.nan], dtype=object), dims='x')

    In [4]: da.sum()
    Out[4]:
    <xarray.DataArray ()>
    array(1)

(GH1866) By Keisuke Fujii.

  • Reduce methods such as DataArray.sum() now accept dtype arguments. (GH1838) By Keisuke Fujii.

  • Added nodatavals attribute to DataArray when using open_rasterio(). (GH1736). By Alan Snow.

  • Use pandas.Grouper class in xarray resample methods rather than the deprecated pandas.TimeGrouper class (GH1766). By Joe Hamman.

  • Experimental support for parsing ENVI metadata to coordinates and attributes in xarray.open_rasterio(). By Matti Eskelinen.

  • Reduce memory usage when decoding a variable with a scale_factor, by converting 8-bit and 16-bit integers to float32 instead of float64 (PR1840), and keeping float16 and float32 as float32 (GH1842). Correspondingly, encoded variables may also be saved with a smaller dtype. By Zac Hatfield-Dodds.

  • Speed of reindexing/alignment with dask array is orders of magnitude faster when inserting missing values (GH1847). By Stephan Hoyer.

  • Fix axis keyword ignored when applying np.squeeze to DataArray (GH1487). By Florian Pinault.

  • netcdf4-python has moved its time handling in the netcdftime module to a standalone package (netcdftime). As such, xarray now considers netcdftime an optional dependency. One benefit of this change is that it allows for encoding/decoding of datetimes with non-standard calendars without the netcdf4-python dependency (GH1084). By Joe Hamman.

New functions/methods

Bug fixes

  • Rolling aggregation with the center=True option now gives the same result as pandas, including the last element (GH1046). By Keisuke Fujii.

  • Support indexing with a 0d-np.ndarray (GH1921). By Keisuke Fujii.

  • Added warning in api.py of a netCDF4 bug that occurs when the filepath has 88 characters (GH1745). By Liam Brannigan.

  • Fixed encoding of multi-dimensional coordinates in to_netcdf() (GH1763). By Mike Neish.

  • Fixed chunking with non-file-based rasterio datasets (GH1816) and refactored rasterio test suite. By Ryan Abernathey

  • Bug fix in open_dataset(engine='pydap') (GH1775). By Keisuke Fujii.

  • Bug fix in vectorized assignment (GH1743, GH1744). Now item assignment via DataArray.__setitem__() checks coordinates of target, destination and keys. If there is any conflict among these coordinates, IndexError will be raised. By Keisuke Fujii.

  • Properly point DataArray.__dask_scheduler__ to dask.threaded.get. By Matthew Rocklin.

  • Bug fixes in DataArray.plot.imshow(): all-NaN arrays and arrays with size one in some dimension can now be plotted, which is good for exploring satellite imagery (GH1780). By Zac Hatfield-Dodds.

  • Fixed UnboundLocalError when opening netCDF file (GH1781). By Stephan Hoyer.

  • The variables, attrs, and dimensions properties have been deprecated as part of a bug fix addressing an issue where backends were unintentionally loading the datastore's data and attributes repeatedly during writes (GH1798). By Joe Hamman.

  • Compatibility fixes to plotting module for NumPy 1.14 and pandas 0.22 (GH1813). By Joe Hamman.

  • Bug fix in encoding coordinates with {'_FillValue': None} in netCDF metadata (GH1865). By Chris Roth.

  • Fix indexing with lists for arrays loaded from netCDF files with engine='h5netcdf' (GH1864). By Stephan Hoyer.

  • Corrected a bug with incorrect coordinates for non-georeferenced geotiff files (GH1686). Internally, we now use the rasterio coordinate transform tool instead of doing the computations ourselves. A parse_coordinates kwarg has been added to open_rasterio() (set to True per default). By Fabien Maussion.

  • The colors of discrete colormaps are now the same regardless of whether seaborn is installed or not (GH1896). By Fabien Maussion.

  • Fixed dtype promotion rules in where() and concat() to match pandas (GH1847). A combination of strings/numbers or unicode/bytes now promote to object dtype, instead of strings or unicode. By Stephan Hoyer.

  • Fixed bug where isnull() was loading data stored as dask arrays (GH1937). By Joe Hamman.

v0.10.0 (20 November 2017)

This is a major release that includes bug fixes, new features and a few backwards incompatible changes. Highlights include:

  • Indexing now supports broadcasting over dimensions, similar to NumPy's vectorized indexing (but better!).

  • resample() has a new groupby-like API like pandas.

  • apply_ufunc() facilitates wrapping and parallelizing functions written for NumPy arrays.

  • Performance improvements, particularly for dask and open_mfdataset().

Breaking changes

  • xarray now supports a form of vectorized indexing with broadcasting, where the result of indexing depends on dimensions of indexers, e.g., array.sel(x=ind) with ind.dims == ('y',). Alignment between coordinates on indexed and indexing objects is also now enforced. Due to these changes, existing uses of xarray objects to index other xarray objects will break in some cases.

The new indexing API is much more powerful, supporting outer, diagonal and vectorized indexing in a single interface. The isel_points and sel_points methods are deprecated, since they are now redundant with the isel / sel methods. See Vectorized Indexing for the details (GH1444, GH1436). By Keisuke Fujii and Stephan Hoyer.

Old syntax:

    In [5]: ds.resample('24H', dim='time', how='max')
    Out[5]:
    <xarray.Dataset>
    [...]

New syntax:

    In [6]: ds.resample(time='24H').max()
    Out[6]:
    <xarray.Dataset>
    [...]

Note that both versions are currently supported, but using the old syntax will produce a warning encouraging users to adopt the new syntax. By Daniel Rothenberg.

  • Calling repr() or printing xarray objects at the command line or in a Jupyter Notebook will no longer automatically compute dask variables or load data on arrays lazily loaded from disk (GH1522). By Guido Imperiale.

  • Supplying coords as a dictionary to the DataArray constructor without also supplying an explicit dims argument is no longer supported. This behavior was deprecated in version 0.9 but will now raise an error (GH727).

  • Several existing features have been deprecated and will change to new behavior in xarray v0.11. If you use any of them with xarray v0.10, you should see a FutureWarning that describes how to update your code:

    • Dataset.T has been deprecated as an alias for Dataset.transpose() (GH1232). In the next major version of xarray, it will provide short-cut lookup for variables or attributes with name 'T'.

    • DataArray.__contains__ (e.g., key in data_array) currently checks for membership in DataArray.coords. In the next major version of xarray, it will check membership in the array data found in DataArray.values instead (GH1267).

    • Direct iteration over and counting a Dataset (e.g., [k for k in ds], ds.keys(), ds.values(), len(ds) and if ds) currently includes all variables, both data and coordinates. For improved usability and consistency with pandas, in the next major version of xarray these will change to only include data variables (GH884). Use ds.variables, ds.data_vars or ds.coords as alternatives.

  • Changes to minimum versions of dependencies:

Enhancements

New functions/methods

    In [7]: import xarray as xr

    In [8]: arr = xr.DataArray([[1, 2, 3], [4, 5, 6]], dims=('x', 'y'))

    In [9]: xr.where(arr % 2, 'even', 'odd')
    Out[9]:
    <xarray.DataArray (x: 2, y: 3)>
    array([['even', 'odd', 'even'],
           ['odd', 'even', 'odd']],
          dtype='<U4')
    Dimensions without coordinates: x, y

Equivalently, the where() method also now supports the other argument, for filling with a value other than NaN (GH576). By Stephan Hoyer.

Performance improvements

  • concat() was computing variables that aren't in memory (e.g. dask-based) multiple times; open_mfdataset() was loading them multiple times from disk. Now, both functions will instead load them at most once and, if they do, store them in memory in the concatenated array/dataset (GH1521). By Guido Imperiale.

  • Speed-up (x 100) of decode_cf_datetime(). By Christian Chwala.

IO related improvements

    In [10]: from pathlib import Path  # In Python 2, use pathlib2!

    In [11]: data_dir = Path("data/")

    In [12]: one_file = data_dir / "dta_for_month_01.nc"

    In [13]: xr.open_dataset(one_file)
    Out[13]:
    <xarray.Dataset>
    [...]

By Willi Rath.

  • You can now explicitly disable any default _FillValue (NaN for floating point values) by passing the encoding {'_FillValue': None} (GH1598). By Stephan Hoyer.

  • More attributes available in the attrs dictionary when raster files are opened with open_rasterio(). By Greg Brener.

  • Support for NetCDF files using an _Unsigned attribute to indicate that a signed integer data type should be interpreted as unsigned bytes (GH1444). By Eric Bruning.

  • Support using an existing, opened netCDF4 Dataset with NetCDF4DataStore. This permits creating a Dataset from a netCDF4 Dataset that has been opened using other means (GH1459). By Ryan May.

  • Changed PydapDataStore to take a Pydap dataset. This permits opening Opendap datasets that require authentication, by instantiating a Pydap dataset with a session object. Also added xarray.backends.PydapDataStore.open() which takes a url and session object (GH1068). By Philip Graae.

  • Support reading and writing unlimited dimensions with h5netcdf (GH1636). By Joe Hamman.

Other improvements

  • Added _ipython_key_completions_ to xarray objects, to enable autocompletion for dictionary-like access in IPython, e.g., ds['tem + tab -> ds['temperature'] (GH1628). By Keisuke Fujii.

  • Support passing keyword arguments to load, compute, and persist methods. Any keyword arguments supplied to these methods are passed on to the corresponding dask function (GH1523). By Joe Hamman.

  • Encoding attributes are now preserved when xarray objects are concatenated. The encoding is copied from the first object (GH1297). By Joe Hamman and Gerrit Holl.

  • Support applying rolling window operations using bottleneck's moving window functions on data stored as dask arrays (GH1279). By Joe Hamman.

  • Experimental support for the Dask collection interface (GH1674). By Matthew Rocklin.

Bug fixes

  • Suppress RuntimeWarning issued by numpy for "invalid value comparisons" (e.g. NaN). Xarray now behaves similarly to Pandas in its treatment of binary and unary operations on objects with NaNs (GH1657). By Joe Hamman.

  • Unsigned int support for reduce methods with skipna=True (GH1562). By Keisuke Fujii.

  • Fixes to ensure xarray works properly with pandas 0.21:

By Stephan Hoyer.

Bug fixes after rc1

  • Suppress warning in IPython autocompletion, related to the deprecation of .T attributes (GH1675). By Keisuke Fujii.

  • Fix a bug in lazily-indexed netCDF arrays. (GH1688) By Keisuke Fujii.

  • (Internal bug) MemoryCachedArray now supports the orthogonal indexing. Also made some internal cleanups around array wrappers (GH1429). By Keisuke Fujii.

  • (Internal bug) MemoryCachedArray now always wraps np.ndarray by NumpyIndexingAdapter. (GH1694) By Keisuke Fujii.

  • Fix importing xarray when running Python with -OO (GH1706). By Stephan Hoyer.

  • Saving a netCDF file with a coordinate with spaces in its name now raises an appropriate warning (GH1689). By Stephan Hoyer.

  • Fix two bugs that were preventing dask arrays from being specified as coordinates in the DataArray constructor (GH1684). By Joe Hamman.

  • Fixed apply_ufunc with dask='parallelized' for scalar arguments (GH1697). By Stephan Hoyer.

  • Fix "Chunksize cannot exceed dimension size" error when writing netCDF4 files loaded from disk (GH1225). By Stephan Hoyer.

  • Validate the shape of coordinates with names matching dimensions in the DataArray constructor (GH1709). By Stephan Hoyer.

  • Raise NotImplementedError when attempting to save a MultiIndex to a netCDF file (GH1547). By Stephan Hoyer.

  • Remove netCDF dependency from rasterio backend tests. By Matti Eskelinen

Bug fixes after rc2

  • Fixed unexpected behavior in Dataset.set_index() and DataArray.set_index() introduced by Pandas 0.21.0. Setting a new index with a single variable resulted in a 1-level pandas.MultiIndex instead of a simple pandas.Index (GH1722). By Benoit Bovy.

  • Fixed unexpected memory loading of backend arrays after print. (GH1720). By Keisuke Fujii.

v0.9.6 (8 June 2017)

This release includes a number of backwards compatible enhancements and bug fixes.

Enhancements

Bug fixes

  • Fix error from repeated indexing of datasets loaded from disk (GH1374). By Stephan Hoyer.

  • Fix a bug where .isel_points wrongly assigns unselected coordinate to data_vars. By Keisuke Fujii.

  • Tutorial datasets are now checked against a reference MD5 sum to confirm successful download (GH1392). By Matthew Gidden.

  • DataArray.chunk() now accepts dask specific kwargs like Dataset.chunk() does. By Fabien Maussion.

  • Support for engine='pydap' with recent releases of Pydap (3.2.2+), including on Python 3 (GH1174).

Documentation

Testing

  • Fix test suite failure caused by changes to pandas.cut function (GH1386). By Ryan Abernathey.

  • Enhanced test suite by use of @network decorator, which is controlled via the --run-network-tests command line argument to py.test (GH1393). By Matthew Gidden.

v0.9.5 (17 April, 2017)

Remove an inadvertently introduced print statement.

v0.9.3 (16 April, 2017)

This minor release includes bug-fixes and backwards compatible enhancements.

Enhancements

Bug fixes

  • Fix .where() with drop=True when arguments do not have indexes (GH1350). This bug, introduced in v0.9, resulted in xarray producing incorrect results in some cases. By Stephan Hoyer.

  • Fixed writing to file-like objects with to_netcdf() (GH1320). Stephan Hoyer.

  • Fixed explicitly setting engine='scipy' with to_netcdf when not providing a path (GH1321). Stephan Hoyer.

  • Fixed open_dataarray not properly passing its parameters to open_dataset (GH1359). Stephan Hoyer.

  • Ensure test suite works when run from an installed version of xarray (GH1336). Use @pytest.mark.slow instead of a custom flag to mark slow tests. By Stephan Hoyer

v0.9.2 (2 April 2017)

The minor release includes bug-fixes and backwards compatible enhancements.

Enhancements

  • .rolling() on Dataset is now supported (GH859). By Keisuke Fujii.

  • When bottleneck version 1.1 or later is installed, use bottleneck for rolling var, argmin, argmax, and rank computations. Also, rolling median now accepts a min_periods argument (GH1276). By Joe Hamman.

  • When .plot() is called on a 2D DataArray and only one dimension is specified with x= or y=, the other dimension is now guessed (GH1291). By Vincent Noel.

  • Added new method assign_attrs() to DataArray and Dataset, a chained-method compatible implementation of the dict.update method on attrs (GH1281). By Henry S. Harrison.

  • Added new autoclose=True argument to open_mfdataset() to explicitly close opened files when not in use to prevent occurrence of an OS Error related to too many open files (GH1198). Note, the default is autoclose=False, which is consistent with previous xarray behavior. By Phillip J. Wolfram.

  • The repr() of Dataset and DataArray attributes uses a similar format to coordinates and variables, with vertically aligned entries truncated to fit on a single line (GH1319). Hopefully this will stop people writing data.attrs = {} and discarding metadata in notebooks for the sake of cleaner output. The full metadata is still available as data.attrs. By Zac Hatfield-Dodds.

  • Enhanced test suite by use of @slow and @flaky decorators, which are controlled via the --run-flaky and --skip-slow command line arguments to py.test (GH1336). By Stephan Hoyer and Phillip J. Wolfram.

  • New aggregation on rolling objects DataArray.rolling(…).count() which provides a rolling count of valid values (GH1138).

Bug fixes

v0.9.1 (30 January 2017)

Renamed the "Unindexed dimensions" section in the Dataset and DataArray repr (added in v0.9.0) to "Dimensions without coordinates" (GH1199).

v0.9.0 (25 January 2017)

This major release includes five months' worth of enhancements and bug fixes from 24 contributors, including some significant changes that are not fully backwards compatible. Highlights include:

Breaking changes

  • Index coordinates for each dimension are now optional, and no longer created by default (GH1017). You can identify such dimensions without coordinates by their appearance in the list of "Dimensions without coordinates" in the Dataset or DataArray repr:

    In [14]: xr.Dataset({'foo': (('x', 'y'), [[1, 2]])})
    Out[14]:
    <xarray.Dataset>
    Dimensions:  (x: 1, y: 2)
    Dimensions without coordinates: x, y
    Data variables:
        foo      (x, y) int64 1 2

This has a number of implications:

  • align() and reindex() can now error, if dimension labels are missing and dimensions have different sizes.

  • Because pandas does not support missing indexes, methods such as to_dataframe/from_dataframe and stack/unstack no longer roundtrip faithfully on all inputs. Use reset_index() to remove undesired indexes.

  • Dataset.__delitem__ and drop() no longer delete/drop variables that have dimensions matching a deleted/dropped variable.

  • DataArray.coords.__delitem__ is now allowed on variables matching dimension names.

  • .sel and .loc now handle indexing along a dimension without coordinate labels by doing integer based indexing. See Missing coordinate labels for an example.

  • indexes is no longer guaranteed to include all dimension names as keys. The new method get_index() has been added to get an index for a dimension guaranteed, falling back to produce a default RangeIndex if necessary.

  • The default behavior of merge is now compat='no_conflicts', so some merges will now succeed in cases that previously raised xarray.MergeError. Set compat='broadcast_equals' to restore the previous default. See Merging with 'no_conflicts' for more details.

  • Reading values no longer always caches values in a NumPy array (GH1128). Caching of .values on variables read from netCDF files on disk is still the default when open_dataset() is called with cache=True. By Guido Imperiale and Stephan Hoyer.

  • Pickling a Dataset or DataArray linked to a file on disk no longer caches its values into memory before pickling (GH1128). Instead, pickle stores file paths and restores objects by reopening file references. This enables preliminary, experimental use of xarray for opening files with dask.distributed. By Stephan Hoyer.

  • Coordinates used to index a dimension are now loaded eagerly into pandas.Index objects, instead of loading the values lazily. By Guido Imperiale.

  • Automatic levels for 2d plots are now guaranteed to land on vmin and vmax when these kwargs are explicitly provided (GH1191). The automated level selection logic also slightly changed. By Fabien Maussion.

  • DataArray.rename() behavior changed to strictly change the DataArray.name if called with a string argument, or strictly change coordinate names if called with a dict-like argument. By Markus Gonser.

  • By default to_netcdf() adds a _FillValue = NaN attribute to float types. By Frederic Laliberte.

  • repr on DataArray objects uses a shortened display for NumPy array data that is less likely to overflow onto multiple pages (GH1207). By Stephan Hoyer.

  • xarray no longer supports python 3.3, versions of dask prior to v0.9.0,or versions of bottleneck prior to v1.0.

Deprecations

  • Renamed the Coordinate class from xarray's low level API to IndexVariable. Variable.to_variable and Variable.to_coord have been renamed to to_base_variable() and to_index_variable().

  • Deprecated supplying coords as a dictionary to the DataArray constructor without also supplying an explicit dims argument. The old behavior encouraged relying on the iteration order of dictionaries, which is a bad practice (GH727).

  • Removed a number of methods deprecated since v0.7.0 or earlier: load_data, vars, drop_vars, dump, dumps and the variables keyword argument to Dataset.

  • Removed the dummy module that enabled import xray.

Enhancements

Bug fixes

  • groupby_bins now restores empty bins by default (GH1019). By Ryan Abernathey.

  • Fix issues for dates outside the valid range of pandas timestamps (GH975). By Mathias Hauser.

  • Unstacking produced flipped array after stacking decreasing coordinate values (GH980). By Stephan Hoyer.

  • Setting dtype via the encoding parameter of to_netcdf failed if the encoded dtype was the same as the dtype of the original array (GH873). By Stephan Hoyer.

  • Fix issues with variables where both attributes _FillValue and missing_value are set to NaN (GH997). By Marco Zühlke.

  • .where() and .fillna() now preserve attributes (GH1009). By Fabien Maussion.

  • Applying broadcast() to an xarray object based on the dask backend won't accidentally convert the array from dask to numpy anymore (GH978). By Guido Imperiale.

  • Dataset.concat() now preserves variable order (GH1027). By Fabien Maussion.

  • Fixed an issue with pcolormesh (GH781). A new infer_intervals keyword gives control on whether the cell intervals should be computed or not. By Fabien Maussion.

  • Grouping over a dimension with non-unique values with groupby gives correct groups. By Stephan Hoyer.

  • Fixed accessing coordinate variables with non-string names from .coords. By Stephan Hoyer.

  • rename() now simultaneously renames the array and any coordinate with the same name, when supplied via a dict (GH1116). By Yves Delley.

  • Fixed sub-optimal performance in certain operations with object arrays (GH1121). By Yves Delley.

  • Fix .groupby(group) when group has datetime dtype (GH1132). By Jonas Sølvsteen.

  • Fixed a bug with facetgrid (the norm keyword was ignored, GH1159). By Fabien Maussion.

  • Resolved a concurrency bug that could cause Python to crash when simultaneously reading and writing netCDF4 files with dask (GH1172). By Stephan Hoyer.

  • Fix to make .copy() actually copy dask arrays, which will be relevant for future releases of dask in which dask arrays will be mutable (GH1180). By Stephan Hoyer.

  • Fix opening NetCDF files with multi-dimensional time variables (GH1229). By Stephan Hoyer.

Performance improvements

  • isel_points() and sel_points() now use vectorised indexing in numpy and dask (GH1161), which can result in several orders of magnitude speedup. By Jonathan Chambers.

v0.8.2 (18 August 2016)

This release includes a number of bug fixes and minor enhancements.

Breaking changes

Enhancements

Bug fixes

  • Ensure xarray works with h5netcdf v0.3.0 for arrays with dtype=str (GH953). By Stephan Hoyer.

  • Dataset.__dir__() (i.e. the method python calls to get autocomplete options) failed if one of the dataset's keys was not a string (GH852). By Maximilian Roos.

  • Dataset constructor can now take arbitrary objects as values (GH647). By Maximilian Roos.

  • Clarified copy argument for reindex() and align(), which now consistently always return new xarray objects (GH927).

  • Fix open_mfdataset with engine='pynio' (GH936). By Stephan Hoyer.

  • groupby_bins sorted bin labels as strings (GH952). By Stephan Hoyer.

  • Fix bug introduced by v0.8.0 that broke assignment to datasets when both the left and right side have the same non-unique index values (GH956).

v0.8.1 (5 August 2016)

Bug fixes

  • Fix bug in v0.8.0 that broke assignment to Datasets with non-unique indexes (GH943). By Stephan Hoyer.

v0.8.0 (2 August 2016)

This release includes four months of new features and bug fixes, including several breaking changes.

Breaking changes

  • Dropped support for Python 2.6 (GH855).

  • Indexing on multi-index now drops levels, which is consistent with pandas. It also changes the name of the dimension / coordinate when the multi-index is reduced to a single index (GH802).

  • Contour plots no longer add a colorbar per default (GH866). Filled contour plots are unchanged.

  • DataArray.values and .data now always return a NumPy array-like object, even for 0-dimensional arrays with object dtype (GH867). Previously, .values returned native Python objects in such cases. To convert the values of scalar arrays to Python objects, use the .item() method.

Enhancements

  • Groupby operations now support grouping over multidimensional variables. A new method called groupby_bins() has also been added to allow users to specify bins for grouping. The new features are described in Multidimensional Grouping and Working with Multidimensional Coordinates. By Ryan Abernathey.

  • DataArray and Dataset method where() now supports a drop=True option that clips coordinate elements that are fully masked. By Phillip J. Wolfram.

  • New top level merge() function allows for combining variables from any number of Dataset and/or DataArray variables. See Merge for more details. By Stephan Hoyer.

  • DataArray and Dataset method resample() now supports the keep_attrs=False option that determines whether variable and dataset attributes are retained in the resampled object. By Jeremy McGibbon.

  • Better multi-index support in DataArray and Dataset sel() and loc() methods, which now behave more closely to pandas and which also accept dictionaries for indexing based on given level names and labels (see Multi-level indexing). By Benoit Bovy.

  • New (experimental) decorators register_dataset_accessor() and register_dataarray_accessor() for registering custom xarray extensions without subclassing; see the sketch after this list. They are described in the new documentation page on xarray Internals. By Stephan Hoyer.

  • Round trip boolean datatypes. Previously, writing boolean datatypes to netCDF formats would raise an error since netCDF does not have a bool datatype. This feature reads/writes a dtype attribute to boolean variables in netCDF files. By Joe Hamman.

  • 2D plotting methods now have two new keywords (cbar_ax and cbar_kwargs), allowing more control on the colorbar (GH872). By Fabien Maussion.

  • New Dataset method filter_by_attrs(), akin to netCDF4.Dataset.get_variables_by_attributes, to easily filter data variables using their attributes. Filipe Fernandes.
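A minimal sketch of registering a custom accessor (the accessor name "geo" and the center property are arbitrary examples, not part of xarray):

    import xarray as xr

    @xr.register_dataset_accessor("geo")
    class GeoAccessor:
        def __init__(self, ds):
            self._ds = ds

        @property
        def center(self):
            # mean longitude/latitude of the dataset
            return float(self._ds.lon.mean()), float(self._ds.lat.mean())

    ds = xr.Dataset(coords={"lon": [0.0, 10.0], "lat": [40.0, 50.0]})
    ds.geo.center  # -> (5.0, 45.0)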

Bug fixes

  • Attributes were being retained by default for some resampling operations when they should not. With the keep_attrs=False option, they will no longer be retained by default. This may be backwards-incompatible with some scripts, but the attributes may be kept by adding the keep_attrs=True option. By Jeremy McGibbon.

  • Concatenating xarray objects along an axis with a MultiIndex or PeriodIndex preserves the nature of the index (GH875). By Stephan Hoyer.

  • Fixed bug in arithmetic operations on DataArray objects whose dimensions are numpy structured arrays or recarrays (GH861, GH837). By Maciek Swat.

  • Fix a bug where xarray.ufuncs that take two arguments would incorrectly use numpy functions instead of dask.array functions (GH876). By Stephan Hoyer.

  • Support for pickling functions from xarray.ufuncs (GH901). By Stephan Hoyer.

  • Variable.copy(deep=True) no longer converts MultiIndex into a base Index (GH769). By Benoit Bovy.

  • Fixes for groupby on dimensions with a multi-index (GH867). By Stephan Hoyer.

  • Fix printing datasets with unicode attributes on Python 2 (GH892). By Stephan Hoyer.

  • Fixed incorrect test for dask version (GH891). By Stephan Hoyer.

  • Fixed dim argument for isel_points/sel_points when a pandas.Index is passed. By Stephan Hoyer.

  • contour() now plots the correct number of contours (GH866). By Fabien Maussion.

v0.7.2 (13 March 2016)

This release includes two new, entirely backwards compatible features and several bug fixes.

Enhancements

  • New DataArray method DataArray.dot() for calculating the dot product of two DataArrays along shared dimensions. By Dean Pospisil.

  • Rolling window operations on DataArray objects are now supported via a new DataArray.rolling() method. For example:

    In [15]: import xarray as xr; import numpy as np

    In [16]: arr = xr.DataArray(np.arange(0, 7.5, 0.5).reshape(3, 5),
       ....:                    dims=('x', 'y'))

    In [17]: arr
    Out[17]:
    <xarray.DataArray (x: 3, y: 5)>
    array([[ 0. ,  0.5,  1. ,  1.5,  2. ],
           [ 2.5,  3. ,  3.5,  4. ,  4.5],
           [ 5. ,  5.5,  6. ,  6.5,  7. ]])
    Coordinates:
      * x        (x) int64 0 1 2
      * y        (y) int64 0 1 2 3 4

    In [18]: arr.rolling(y=3, min_periods=2).mean()
    Out[18]:
    <xarray.DataArray (x: 3, y: 5)>
    array([[  nan,  0.25,  0.5 ,  1.  ,  1.5 ],
           [  nan,  2.75,  3.  ,  3.5 ,  4.  ],
           [  nan,  5.25,  5.5 ,  6.  ,  6.5 ]])
    Coordinates:
      * x        (x) int64 0 1 2
      * y        (y) int64 0 1 2 3 4

See Rolling window operations for more details. By Joe Hamman.

Bug fixes

  • Fixed an issue where plots using pcolormesh and Cartopy axes were being distorted by the inference of the axis interval breaks. This change chooses not to modify the coordinate variables when the axes have the attribute projection, allowing Cartopy to handle the extent of pcolormesh plots (GH781). By Joe Hamman.

  • 2D plots now better handle additional coordinates which are not DataArray dimensions (GH788). By Fabien Maussion.

v0.7.1 (16 February 2016)

This is a bug fix release that includes two small, backwards compatible enhancements. We recommend that all users upgrade.

Enhancements

  • Numerical operations now return empty objects on no overlapping labels rather than raising ValueError (GH739).

  • Series is now supported as valid input to the Dataset constructor (GH740).

Bug fixes

  • Restore checks for shape consistency between data and coordinates in the DataArray constructor (GH758).

  • Single dimension variables no longer transpose as part of a broader .transpose. This behavior was causing pandas.PeriodIndex dimensions to lose their type (GH749)

  • Dataset labels remain as their native type on .to_dataset. Previously they were coerced to strings (GH745)

  • Fixed a bug where replacing a DataArray index coordinate would improperly align the coordinate (GH725).

  • DataArray.reindex_like now maintains the dtype of complex numbers when reindexing leads to NaN values (GH738).

  • Dataset.rename and DataArray.rename support the old and new names being the same (GH724).

  • Fix from_dataset() for DataFrames with a Categorical column and a MultiIndex index (GH737).

  • Fixes to ensure xarray works properly after the upcoming pandas v0.18 and NumPy v1.11 releases.

Acknowledgments

The following individuals contributed to this release:

  • Edward Richards

  • Maximilian Roos

  • Rafael Guedes

  • Spencer Hill

  • Stephan Hoyer

v0.7.0 (21 January 2016)

This major release includes a redesign of DataArray internals, as well as new methods for reshaping, rolling and shifting data. It includes preliminary support for pandas.MultiIndex, as well as a number of other features and bug fixes, several of which offer improved compatibility with pandas.

New name

The project formerly known as “xray” is now “xarray”, pronounced “x-array”! This avoids a namespace conflict with the entire field of x-ray science. Renaming our project seemed like the right thing to do, especially because some scientists who work with actual x-rays are interested in using this project in their work. Thanks for your understanding and patience in this transition. You can now find our documentation and code repository at new URLs.

To ease the transition, we have simultaneously released v0.7.0 of both xray and xarray on the Python Package Index. These packages are identical. For now, import xray still works, except it issues a deprecation warning. This will be the last xray release. Going forward, we recommend switching your import statements to import xarray as xr.

Breaking changes

  • The internal data model used by DataArray has been rewritten to fix several outstanding issues (GH367, GH634, this stackoverflow report). Internally, DataArray is now implemented in terms of ._variable and ._coords attributes instead of holding variables in a Dataset object.

This refactor ensures that if a DataArray has the same name as one of its coordinates, the array and the coordinate no longer share the same data.

In practice, this means that creating a DataArray with the same name as one of its dimensions no longer automatically uses that array to label the corresponding coordinate. You will now need to provide coordinate labels explicitly. Here’s the old behavior:

    In [19]: xray.DataArray([4, 5, 6], dims='x', name='x')
    Out[19]:
    <xray.DataArray 'x' (x: 3)>
    array([4, 5, 6])
    Coordinates:
      * x        (x) int64 4 5 6

and the new behavior (compare the values of the x coordinate):

    In [20]: xray.DataArray([4, 5, 6], dims='x', name='x')
    Out[20]:
    <xray.DataArray 'x' (x: 3)>
    array([4, 5, 6])
    Coordinates:
      * x        (x) int64 0 1 2

  • It is no longer possible to convert a DataArray to a Dataset with xray.DataArray.to_dataset() if it is unnamed. This will now raise ValueError. If the array is unnamed, you need to supply the name argument.

Enhancements

  • Basic support for pandas.MultiIndex coordinates on xray objects, including indexing, stack() and unstack():

    In [21]: df = pd.DataFrame({'foo': range(3),
       ....:                    'x': ['a', 'b', 'b'],
       ....:                    'y': [0, 0, 1]})
       ....:

    In [22]: s = df.set_index(['x', 'y'])['foo']

    In [23]: arr = xray.DataArray(s, dims='z')

    In [24]: arr
    Out[24]:
    <xray.DataArray 'foo' (z: 3)>
    array([0, 1, 2])
    Coordinates:
      * z        (z) object ('a', 0) ('b', 0) ('b', 1)

    In [25]: arr.indexes['z']
    Out[25]:
    MultiIndex(levels=[[u'a', u'b'], [0, 1]],
               labels=[[0, 1, 1], [0, 0, 1]],
               names=[u'x', u'y'])

    In [26]: arr.unstack('z')
    Out[26]:
    <xray.DataArray 'foo' (x: 2, y: 2)>
    array([[ 0., nan],
           [ 1.,  2.]])
    Coordinates:
      * x        (x) object 'a' 'b'
      * y        (y) int64 0 1

    In [27]: arr.unstack('z').stack(z=('x', 'y'))
    Out[27]:
    <xray.DataArray 'foo' (z: 4)>
    array([ 0., nan,  1.,  2.])
    Coordinates:
      * z        (z) object ('a', 0) ('a', 1) ('b', 0) ('b', 1)

See Stack and unstack for more details.

Warning

xray’s MultiIndex support is still experimental, and we have a long to-do list of desired additions (GH719), including better display of multi-index levels when printing a Dataset, and support for saving datasets with a MultiIndex to a netCDF file. User contributions in this area would be greatly appreciated.

  • Support for reading GRIB, HDF4 and other file formats via PyNIO. See Formats supported by PyNIO for more details.

  • Better error message when a variable is supplied with the same name as one of its dimensions.

  • Plotting: more control on colormap parameters (GH642). vmin and vmax will not be silently ignored anymore. Setting center=False prevents automatic selection of a divergent colormap.

  • New shift() and roll() methods for shifting/rotating datasets or arrays along a dimension:

    In [28]: array = xray.DataArray([5, 6, 7, 8], dims='x')

    In [29]: array.shift(x=2)
    Out[29]:
    <xarray.DataArray (x: 4)>
    array([nan, nan,  5.,  6.])
    Dimensions without coordinates: x

    In [30]: array.roll(x=2)
    Out[30]:
    <xarray.DataArray (x: 4)>
    array([7, 8, 5, 6])
    Dimensions without coordinates: x

Notice that shift moves data independently of coordinates, but roll moves both data and coordinates.

  • Assigning a pandas object directly as a Dataset variable is now permitted. Its index names correspond to the dims of the Dataset, and its data is aligned.

  • Passing a pandas.DataFrame or pandas.Panel to a Dataset constructor is now permitted.

  • New function broadcast() for explicitly broadcasting DataArray and Dataset objects against each other. For example:

    In [31]: a = xray.DataArray([1, 2, 3], dims='x')

    In [32]: b = xray.DataArray([5, 6], dims='y')

    In [33]: a
    Out[33]:
    <xarray.DataArray (x: 3)>
    array([1, 2, 3])
    Dimensions without coordinates: x

    In [34]: b
    Out[34]:
    <xarray.DataArray (y: 2)>
    array([5, 6])
    Dimensions without coordinates: y

    In [35]: a2, b2 = xray.broadcast(a, b)

    In [36]: a2
    Out[36]:
    <xarray.DataArray (x: 3, y: 2)>
    array([[1, 1],
           [2, 2],
           [3, 3]])
    Dimensions without coordinates: x, y

    In [37]: b2
    Out[37]:
    <xarray.DataArray (x: 3, y: 2)>
    array([[5, 6],
           [5, 6],
           [5, 6]])
    Dimensions without coordinates: x, y

Bug fixes

  • Fixes for several issues found on DataArray objects with the same name as one of their coordinates (see Breaking changes for more details).

  • DataArray.to_masked_array always returns masked array with mask being an array (not a scalar value) (GH684).

  • Allows for (imperfect) repr of Coords when underlying index is PeriodIndex (GH645).

  • Attempting to assign a Dataset or DataArray variable/attribute using attribute-style syntax (e.g., ds.foo = 42) now raises an error rather than silently failing (GH656, GH714).

  • You can now pass pandas objects with non-numpy dtypes (e.g., categorical or datetime64 with a timezone) into xray without an error (GH716).

Acknowledgments

The following individuals contributed to this release:

  • Antony Lee

  • Fabien Maussion

  • Joe Hamman

  • Maximilian Roos

  • Stephan Hoyer

  • Takeshi Kanmae

  • femtotrader

v0.6.1 (21 October 2015)

This release contains a number of bug and compatibility fixes, as well as enhancements to plotting, indexing and writing files to disk.

Note that the minimum required version of dask for use with xray is now version 0.6.

API Changes

  • The handling of colormaps and discrete color lists for 2D plots in plot() was changed to provide more compatibility with matplotlib’s contour and contourf functions (GH538). Now discrete lists of colors should be specified using the colors keyword, rather than cmap.

Enhancements

  • Faceted plotting through FacetGrid and the plot() method. See Faceting for more details and examples.

  • sel() and reindex() now support the tolerance argument for controlling nearest-neighbor selection (GH629):

    In [38]: array = xray.DataArray([1, 2, 3], dims='x')

    In [39]: array.reindex(x=[0.9, 1.5], method='nearest', tolerance=0.2)
    Out[39]:
    <xray.DataArray (x: 2)>
    array([ 2., nan])
    Coordinates:
      * x        (x) float64 0.9 1.5

This feature requires pandas v0.17 or newer.

  • New encoding argument in to_netcdf() for writing netCDF files with compression, as described in the new documentation section on Writing encoded data.
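
For illustration, a minimal sketch of per-variable compression via the encoding argument (modern import spelling; 'out.nc' is a placeholder filename):

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({'foo': (('x', 'y'), np.random.rand(100, 100))})
    # zlib/complevel in the per-variable encoding enable compression with the
    # netCDF4 backend
    ds.to_netcdf('out.nc', encoding={'foo': {'zlib': True, 'complevel': 4}})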

  • Add real and imag attributes to Dataset and DataArray (GH553).
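
A minimal sketch (modern import spelling):

    import numpy as np
    import xarray as xr

    arr = xr.DataArray(np.array([1 + 2j, 3 - 4j]), dims='x')
    arr.real  # real parts: 1.0, 3.0
    arr.imag  # imaginary parts: 2.0, -4.0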

  • More informative error message with from_dataframe() if the frame has duplicate columns.

  • xray now uses deterministic names for dask arrays it creates or opens from disk. This allows xray users to take advantage of dask’s nascent support for caching intermediate computation results. See GH555 for an example.

Bug fixes

  • Forwards compatibility with the latest pandas release (v0.17.0). We were using some internal pandas routines for datetime conversion, which unfortunately have now changed upstream (GH569).

  • Aggregation functions now correctly skip NaN for data of complex128 dtype (GH554).

  • Fixed indexing 0d arrays with unicode dtype (GH568).

  • name() and Dataset keys must be a string or None to be written to netCDF (GH533).

  • where() now uses dask instead of numpy if either the array or other is a dask array. Previously, if other was a numpy array the method was evaluated eagerly.

  • Global attributes are now handled more consistently when loading remote datasets using engine='pydap' (GH574).

  • It is now possible to assign to the .data attribute of DataArray objects.

  • coordinates attribute is now kept in the encoding dictionary after decoding (GH610).

  • Compatibility with numpy 1.10 (GH617).

Acknowledgments

The following individuals contributed to this release:

  • Ryan Abernathey

  • Pete Cable

  • Clark Fitzgerald

  • Joe Hamman

  • Stephan Hoyer

  • Scott Sinclair

v0.6.0 (21 August 2015)

This release includes numerous bug fixes and enhancements. Highlights include the introduction of a plotting module and the new Dataset and DataArray methods isel_points(), sel_points(), where() and diff(). There are no breaking changes from v0.5.2.

Enhancements

  • Plotting methods have been implemented on DataArray objects via plot() through integration with matplotlib (GH185). For an introduction, see Plotting.

  • Variables in netCDF files with multiple missing values are now decoded as NaN after issuing a warning if open_dataset is called with mask_and_scale=True.

  • We clarified our rules for when the result from an xray operation is a copy vs. a view (see copies vs views for more details).

  • Dataset variables are now written to netCDF files in order of appearance when using the netcdf4 backend (GH479).

  • Added isel_points() and sel_points() to support pointwise indexing of Datasets and DataArrays (GH475).

    In [40]: da = xray.DataArray(np.arange(56).reshape((7, 8)),
       ....:                     coords={'x': list('abcdefg'),
       ....:                             'y': 10 * np.arange(8)},
       ....:                     dims=['x', 'y'])
       ....:

    In [41]: da
    Out[41]:
    <xray.DataArray (x: 7, y: 8)>
    array([[ 0,  1,  2,  3,  4,  5,  6,  7],
           [ 8,  9, 10, 11, 12, 13, 14, 15],
           [16, 17, 18, 19, 20, 21, 22, 23],
           [24, 25, 26, 27, 28, 29, 30, 31],
           [32, 33, 34, 35, 36, 37, 38, 39],
           [40, 41, 42, 43, 44, 45, 46, 47],
           [48, 49, 50, 51, 52, 53, 54, 55]])
    Coordinates:
      * y        (y) int64 0 10 20 30 40 50 60 70
      * x        (x) |S1 'a' 'b' 'c' 'd' 'e' 'f' 'g'

    # we can index by position along each dimension
    In [42]: da.isel_points(x=[0, 1, 6], y=[0, 1, 0], dim='points')
    Out[42]:
    <xray.DataArray (points: 3)>
    array([ 0,  9, 48])
    Coordinates:
        y        (points) int64 0 10 0
        x        (points) |S1 'a' 'b' 'g'
      * points   (points) int64 0 1 2

    # or equivalently by label
    In [43]: da.sel_points(x=['a', 'b', 'g'], y=[0, 10, 0], dim='points')
    Out[43]:
    <xray.DataArray (points: 3)>
    array([ 0,  9, 48])
    Coordinates:
        y        (points) int64 0 10 0
        x        (points) |S1 'a' 'b' 'g'
      * points   (points) int64 0 1 2

  • New where() method for masking xray objects according to some criteria. This works particularly well with multi-dimensional data:

    In [44]: ds = xray.Dataset(coords={'x': range(100), 'y': range(100)})

    In [45]: ds['distance'] = np.sqrt(ds.x ** 2 + ds.y ** 2)

    In [46]: ds.distance.where(ds.distance < 100).plot()
    Out[46]: <matplotlib.collections.QuadMesh at 0x7f34256a3278>

    [figure: where_example.png]

  • Added new methods DataArray.diff and Dataset.diff for finite difference calculations along a given axis.
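
A minimal sketch of the new methods (modern import spelling):

    import xarray as xr

    arr = xr.DataArray([1, 2, 4, 7], dims='x')
    arr.diff('x')       # first differences along 'x': 1, 2, 3
    arr.diff('x', n=2)  # second differences along 'x': 1, 1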

  • New to_masked_array() convenience method for returning a numpy.ma.MaskedArray.

    In [47]: da = xray.DataArray(np.random.random_sample(size=(5, 4)))

    In [48]: da.where(da < 0.5)
    Out[48]:
    <xarray.DataArray (dim_0: 5, dim_1: 4)>
    array([[0.12697 ,      nan, 0.260476,      nan],
           [0.37675 , 0.336222, 0.451376,      nan],
           [0.123102,      nan, 0.373012, 0.447997],
           [0.129441,      nan,      nan, 0.352054],
           [0.228887,      nan,      nan, 0.137554]])
    Dimensions without coordinates: dim_0, dim_1

    In [49]: da.where(da < 0.5).to_masked_array(copy=True)
    Out[49]:
    masked_array(
      data=[[0.12696983303810094, --, 0.26047600586578334, --],
            [0.37674971618967135, 0.33622174433445307, 0.45137647047539964, --],
            [0.12310214428849964, --, 0.37301222522143085, 0.4479968246859435],
            [0.12944067971751294, --, --, 0.35205353914802473],
            [0.2288873043216132, --, --, 0.1375535565632705]],
      mask=[[False, True, False, True],
            [False, False, False, True],
            [False, True, False, False],
            [False, True, True, False],
            [False, True, True, False]],
      fill_value=1e+20)

  • Added new flag "drop_variables" to open_dataset() for excluding variables from being parsed. This may be useful to drop variables with problems or inconsistent values.

Bug fixes

  • Fixed aggregation functions (e.g., sum and mean) on big-endian arrays when bottleneck is installed (GH489).

  • Dataset aggregation functions dropped variables with unsigned integer dtype (GH505).

  • .any() and .all() were not lazy when used on xray objects containing dask arrays.

  • Fixed an error when attempting to save datetime64 variables to netCDF files when the first element is NaT (GH528).

  • Fix pickle on DataArray objects (GH515).

  • Fixed unnecessary coercion of float64 to float32 when using netcdf3 and netcdf4_classic formats (GH526).

v0.5.2 (16 July 2015)

This release contains bug fixes, several additional options for opening and saving netCDF files, and a backwards incompatible rewrite of the advanced options for xray.concat.

Backwards incompatible changes

  • The optional arguments concat_over and mode in concat() have been removed and replaced by data_vars and coords. The new arguments are both more easily understood and more robustly implemented, and allowed us to fix a bug where concat accidentally loaded data into memory. If you set values for these optional arguments manually, you will need to update your code. The default behavior should be unchanged.

Enhancements

  • open_mfdataset() now supports a preprocess argument for preprocessing datasets prior to concatenation. This is useful if datasets cannot be otherwise merged automatically, e.g., if the original datasets have conflicting index coordinates (GH443).
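
A rough sketch of how a preprocess function can be supplied (modern import spelling; the glob pattern and variable name are placeholders):

    import xarray as xr

    def preprocess(ds):
        # keep only the variable of interest from each file
        return ds[['tas']]

    ds = xr.open_mfdataset('model_output_*.nc', preprocess=preprocess)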

  • open_dataset() and open_mfdataset() now use a global thread lock by default for reading from netCDF files with dask. This avoids possible segmentation faults for reading from netCDF4 files when HDF5 is not configured properly for concurrent access (GH444).

  • Added support for serializing arrays of complex numbers with engine='h5netcdf'.

  • The new save_mfdataset() function allows for saving multiple datasets to disk simultaneously. This is useful when processing large datasets with dask.array. For example, to save a dataset too big to fit into memory to one file per year, we could write:

    In [50]: years, datasets = zip(*ds.groupby('time.year'))

    In [51]: paths = ['%s.nc' % y for y in years]

    In [52]: xray.save_mfdataset(datasets, paths)

Bug fixes

  • Fixed min, max, argmin and argmax for arrays with string or unicode types (GH453).

  • open_dataset() and open_mfdataset() support supplying chunks as a single integer.

  • Fixed a bug in serializing scalar datetime variable to netCDF.

  • Fixed a bug that could occur in serialization of 0-dimensional integer arrays.

  • Fixed a bug where concatenating DataArrays was not always lazy (GH464).

  • When reading datasets with h5netcdf, bytes attributes are decoded to strings. This allows conventions decoding to work properly on Python 3 (GH451).

v0.5.1 (15 June 2015)

This minor release fixes a few bugs and an inconsistency with pandas. It also adds the pipe method, copied from pandas.

Enhancements

  • Added pipe(), replicating the new pandas method in version 0.16.2. See Transforming datasets for more details.
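
A minimal sketch of method chaining with pipe (modern import spelling; the function and variable names are illustrative):

    import xarray as xr

    ds = xr.Dataset({'temp_k': ('x', [273.15, 293.15])})

    def to_celsius(ds, name):
        return ds.assign(**{name + '_c': ds[name] - 273.15})

    # pipe passes the dataset as the first argument to the function
    result = ds.pipe(to_celsius, 'temp_k')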

  • assign() and assign_coords() now assign new variables in sorted (alphabetical) order, mirroring the behavior in pandas. Previously, the order was arbitrary.

Bug fixes

  • xray.concat fails in an edge case involving identical coordinate variables (GH425)

  • We now decode variables loaded from netCDF3 files with the scipy engine using native endianness (GH416). This resolves an issue when aggregating these arrays with bottleneck installed.

v0.5 (1 June 2015)

Highlights

The headline feature in this release is experimental support for out-of-core computing (data that doesn’t fit into memory) with dask. This includes a new top-level function open_mfdataset() that makes it easy to open a collection of netCDF files (using dask) as a single xray.Dataset object. For more on dask, read the blog post introducing xray + dask and the new documentation section Parallel computing with Dask.

Dask makes it possible to harness parallelism and manipulate gigantic datasets with xray. It is currently an optional dependency, but it may become required in the future.

Backwards incompatible changes

  • The logic used for choosing which variables are concatenated with concat() has changed. Previously, by default any variables which were equal across a dimension were not concatenated. This led to some surprising behavior, where the behavior of groupby and concat operations could depend on runtime values (GH268). For example:

    In [53]: ds = xray.Dataset({'x': 0})

    In [54]: xray.concat([ds, ds], dim='y')
    Out[54]:
    <xray.Dataset>
    Dimensions:  ()
    Coordinates:
        *empty*
    Data variables:
        x        int64 0

Now, the default always concatenates data variables:

    In [55]: xray.concat([ds, ds], dim='y')
    Out[55]:
    <xarray.Dataset>
    Dimensions:  (y: 2)
    Dimensions without coordinates: y
    Data variables:
        x        (y) int64 0 0

To obtain the old behavior, supply the argument concat_over=[].

Enhancements

  • New to_array() and enhanced to_dataset() methods make it easy to switch back and forth between arrays and datasets:

    In [56]: ds = xray.Dataset({'a': 1, 'b': ('x', [1, 2, 3])},
       ....:                   coords={'c': 42}, attrs={'Conventions': 'None'})
       ....:

    In [57]: ds.to_array()
    Out[57]:
    <xarray.DataArray (variable: 2, x: 3)>
    array([[1, 1, 1],
           [1, 2, 3]])
    Coordinates:
        c         int64 42
      * variable  (variable) <U1 'a' 'b'
    Dimensions without coordinates: x
    Attributes:
        Conventions:  None

    In [58]: ds.to_array().to_dataset(dim='variable')
    Out[58]:
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Coordinates:
        c        int64 42
    Dimensions without coordinates: x
    Data variables:
        a        (x) int64 1 1 1
        b        (x) int64 1 2 3
    Attributes:
        Conventions:  None

  • New fillna() method to fill missing values, modeled off the pandas method of the same name:

    In [59]: array = xray.DataArray([np.nan, 1, np.nan, 3], dims='x')

    In [60]: array.fillna(0)
    Out[60]:
    <xarray.DataArray (x: 4)>
    array([0., 1., 0., 3.])
    Dimensions without coordinates: x

fillna works on both Dataset and DataArray objects, and uses index-based alignment and broadcasting like standard binary operations. It also can be applied by group, as illustrated in Fill missing values with climatology.

  • New assign() and assign_coords() methods patterned off the new DataFrame.assign method in pandas:

    In [61]: ds = xray.Dataset({'y': ('x', [1, 2, 3])})

    In [62]: ds.assign(z = lambda ds: ds.y ** 2)
    Out[62]:
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Dimensions without coordinates: x
    Data variables:
        y        (x) int64 1 2 3
        z        (x) int64 1 4 9

    In [63]: ds.assign_coords(z = ('x', ['a', 'b', 'c']))
    Out[63]:
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Coordinates:
        z        (x) <U1 'a' 'b' 'c'
    Dimensions without coordinates: x
    Data variables:
        y        (x) int64 1 2 3

These methods return a new Dataset (or DataArray) with updated data or coordinate variables.

  • sel() now supports the method parameter, which works like the parameter of the same name on reindex(). It provides a simple interface for doing nearest-neighbor interpolation:

    In [64]: ds.sel(x=1.1, method='nearest')
    Out[64]:
    <xray.Dataset>
    Dimensions:  ()
    Coordinates:
        x        int64 1
    Data variables:
        y        int64 2

    In [65]: ds.sel(x=[1.1, 2.1], method='pad')
    Out[65]:
    <xray.Dataset>
    Dimensions:  (x: 2)
    Coordinates:
      * x        (x) int64 1 2
    Data variables:
        y        (x) int64 2 3

See Nearest neighbor lookups for more details.

  • You can now control the underlying backend used for accessing remote datasets (via OPeNDAP) by specifying engine='netcdf4' or engine='pydap'.

  • xray now provides experimental support for reading and writing netCDF4 files directly via h5py with the h5netcdf package, avoiding the netCDF4-Python package. You will need to install h5netcdf and specify engine='h5netcdf' to try this feature.

  • Accessing data from remote datasets now has retrying logic (with exponential backoff) that should make it robust to occasional bad responses from DAP servers.

  • You can control the width of the Dataset repr with xray.set_options. It can be used either as a context manager, in which case the default is restored outside the context:

    In [66]: ds = xray.Dataset({'x': np.arange(1000)})

    In [67]: with xray.set_options(display_width=40):
       ....:     print(ds)
       ....:
    <xarray.Dataset>
    Dimensions:  (x: 1000)
    Coordinates:
      * x        (x) int64 0 1 2 ... 998 999
    Data variables:
        *empty*

Or to set a global option:

    In [68]: xray.set_options(display_width=80)

The default value for the display_width option is 80.

Deprecations

  • The method load_data() has been renamed to the more succinct load().

v0.4.1 (18 March 2015)

The release contains bug fixes and several new features. All changes should befully backwards compatible.

Enhancements

  • resample() lets you resample a time series to a new temporal resolution. The syntax is the same as in pandas, except you need to supply the time dimension explicitly:

    In [69]: time = pd.date_range('2000-01-01', freq='6H', periods=10)

    In [70]: array = xray.DataArray(np.arange(10), [('time', time)])

    In [71]: array.resample('1D', dim='time')

You can specify how to do the resampling with the how argument; other options such as closed and label let you control labeling:

    In [72]: array.resample('1D', dim='time', how='sum', label='right')

If the desired temporal resolution is higher than the original data (upsampling), xray will insert missing values:

    In [73]: array.resample('3H', 'time')

  • first and last methods on groupby objects let you take the first or last examples from each group along the grouped axis:

    In [74]: array.groupby('time.day').first()

These methods combine well with resample:

    In [75]: array.resample('1D', dim='time', how='first')

  • swap_dims() allows for easily swapping one dimension out for another:

    In [76]: ds = xray.Dataset({'x': range(3), 'y': ('x', list('abc'))})

    In [77]: ds
    Out[77]:
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Coordinates:
      * x        (x) int64 0 1 2
    Data variables:
        y        (x) <U1 'a' 'b' 'c'

    In [78]: ds.swap_dims({'x': 'y'})
    Out[78]:
    <xarray.Dataset>
    Dimensions:  (y: 3)
    Coordinates:
        x        (y) int64 0 1 2
      * y        (y) <U1 'a' 'b' 'c'
    Data variables:
        *empty*

This was possible in earlier versions of xray, but required some contortions.

  • open_dataset() and to_netcdf() now accept an engine argument to explicitly select which underlying library (netcdf4 or scipy) is used for reading/writing a netCDF file.
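
A minimal sketch (modern import spelling; filenames are placeholders):

    import xarray as xr

    ds = xr.Dataset({'foo': ('x', [1, 2, 3])})
    ds.to_netcdf('out.nc', engine='scipy')           # write via scipy (netCDF3)
    ds2 = xr.open_dataset('out.nc', engine='scipy')  # read back with the same library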

Bug fixes

  • Fixed a bug where data netCDF variables read from disk with engine='scipy' could still be associated with the file on disk, even after closing the file (GH341). This manifested itself in warnings about mmapped arrays and segmentation faults (if the data was accessed).

  • Silenced spurious warnings about all-NaN slices when using nan-aware aggregation methods (GH344).

  • Dataset aggregations with keep_attrs=True now preserve attributes on data variables, not just the dataset itself.

  • Tests for xray now pass when run on Windows (GH360).

  • Fixed a regression in v0.4 where saving to netCDF could fail with the error ValueError: could not automatically determine time units.

v0.4 (2 March, 2015)

This is one of the biggest releases yet for xray: it includes some major changes that may break existing code, along with the usual collection of minor enhancements and bug fixes. On the plus side, this release includes all hitherto planned breaking changes, so the upgrade path for xray should be smoother going forward.

Breaking changes

  • We now automatically align index labels in arithmetic, dataset construction, merging and updating. This means the need for manually invoking methods like align() and reindex_like() should be vastly reduced.

For arithmetic, we align based on the intersection of labels:

    In [79]: lhs = xray.DataArray([1, 2, 3], [('x', [0, 1, 2])])

    In [80]: rhs = xray.DataArray([2, 3, 4], [('x', [1, 2, 3])])

    In [81]: lhs + rhs
    Out[81]:
    <xarray.DataArray (x: 2)>
    array([4, 6])
    Coordinates:
      * x        (x) int64 1 2

For dataset construction and merging, we align based on the union of labels:

    In [82]: xray.Dataset({'foo': lhs, 'bar': rhs})
    Out[82]:
    <xarray.Dataset>
    Dimensions:  (x: 4)
    Coordinates:
      * x        (x) int64 0 1 2 3
    Data variables:
        foo      (x) float64 1.0 2.0 3.0 nan
        bar      (x) float64 nan 2.0 3.0 4.0

For update and setitem, we align based on the original object:

    In [83]: lhs.coords['rhs'] = rhs

    In [84]: lhs
    Out[84]:
    <xarray.DataArray (x: 3)>
    array([1, 2, 3])
    Coordinates:
      * x        (x) int64 0 1 2
        rhs      (x) float64 nan 2.0 3.0

  • Aggregations like mean or median now skip missing values by default:

    In [85]: xray.DataArray([1, 2, np.nan, 3]).mean()
    Out[85]:
    <xarray.DataArray ()>
    array(2.)

You can turn this behavior off by supplying the keyword argument skipna=False.

These operations are lightning fast thanks to integration with bottleneck, which is a new optional dependency for xray (numpy is used if bottleneck is not installed).

  • Scalar coordinates no longer conflict with constant arrays with the same value (e.g., in arithmetic, merging datasets and concat), even if they have different shape (GH243). For example, the coordinate c here persists through arithmetic, even though it has different shapes on each DataArray:

    In [86]: a = xray.DataArray([1, 2], coords={'c': 0}, dims='x')

    In [87]: b = xray.DataArray([1, 2], coords={'c': ('x', [0, 0])}, dims='x')

    In [88]: (a + b).coords
    Out[88]:
    Coordinates:
        c        (x) int64 0 0

This functionality can be controlled through the compat option, which has also been added to the Dataset constructor.

  • Datetime shortcuts such as 'time.month' now return a DataArray with the name 'month', not 'time.month' (GH345). This makes it easier to index the resulting arrays when they are used with groupby:

    In [89]: time = xray.DataArray(pd.date_range('2000-01-01', periods=365),
       ....:                       dims='time', name='time')
       ....:

    In [90]: counts = time.groupby('time.month').count()

    In [91]: counts.sel(month=2)
    Out[91]:
    <xarray.DataArray 'time' ()>
    array(29)
    Coordinates:
        month    int64 2

Previously, you would need to use something like counts.sel(**{'time.month': 2}), which is much more awkward.

  • The season datetime shortcut now returns an array of string labels such as 'DJF':

    In [92]: ds = xray.Dataset({'t': pd.date_range('2000-01-01', periods=12, freq='M')})

    In [93]: ds['t.season']
    Out[93]:
    <xarray.DataArray 'season' (t: 12)>
    array(['DJF', 'DJF', 'MAM', 'MAM', 'MAM', 'JJA', 'JJA', 'JJA', 'SON', 'SON',
           'SON', 'DJF'], dtype='<U3')
    Coordinates:
      * t        (t) datetime64[ns] 2000-01-31 2000-02-29 ... 2000-11-30 2000-12-31

Previously, it returned numbered seasons 1 through 4.

  • We have updated our use of the terms of “coordinates” and “variables”. What were known in previous versions of xray as “coordinates” and “variables” are now referred to throughout the documentation as “coordinate variables” and “data variables”. This brings xray in closer alignment to CF Conventions. The only visible change besides the documentation is that Dataset.vars has been renamed Dataset.data_vars.

  • You will need to update your code if you have been ignoring deprecation warnings: methods and attributes that were deprecated in xray v0.3 or earlier (e.g., dimensions, attributes) have gone away.

Enhancements

  • Support for reindex() with a fill method. This provides a useful shortcut for upsampling:

    In [94]: data = xray.DataArray([1, 2, 3], [('x', range(3))])

    In [95]: data.reindex(x=[0.5, 1, 1.5, 2, 2.5], method='pad')
    Out[95]:
    <xarray.DataArray (x: 5)>
    array([1, 2, 2, 3, 3])
    Coordinates:
      * x        (x) float64 0.5 1.0 1.5 2.0 2.5

This will be especially useful once pandas 0.16 is released, at which point xray will immediately support reindexing with method='nearest'.

  • Use functions that return generic ndarrays with DataArray.groupby.apply and Dataset.apply (GH327 and GH329). Thanks Jeff Gerard!

  • Consolidated the functionality of dumps (writing a dataset to a netCDF3 bytestring) into to_netcdf() (GH333).

  • to_netcdf() now supports writing to groups in netCDF4 files (GH333). It also finally has a full docstring – you should read it!

  • open_dataset() and to_netcdf() now work on netCDF3 files when netcdf4-python is not installed as long as scipy is available (GH333).

  • The new Dataset.drop and DataArray.drop methods make it easy to drop explicitly listed variables or index labels:

    # drop variables
    In [96]: ds = xray.Dataset({'x': 0, 'y': 1})

    In [97]: ds.drop('x')
    Out[97]:
    <xarray.Dataset>
    Dimensions:  ()
    Data variables:
        y        int64 1

    # drop index labels
    In [98]: arr = xray.DataArray([1, 2, 3], coords=[('x', list('abc'))])

    In [99]: arr.drop(['a', 'c'], dim='x')
    Out[99]:
    <xarray.DataArray (x: 1)>
    array([2])
    Coordinates:
      * x        (x) <U1 'b'

  • broadcast_equals() has been added to correspond to the new compat option.
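
A minimal sketch of the difference from equals() (modern import spelling):

    import xarray as xr

    a = xr.DataArray([1, 1, 1], dims='x')
    b = xr.DataArray(1)

    a.equals(b)            # False: different dimensions
    a.broadcast_equals(b)  # True: equal after broadcasting against each other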

  • Long attributes are now truncated at 500 characters when printing a dataset (GH338). This should make things more convenient for working with datasets interactively.

  • Added a new documentation example, Calculating Seasonal Averages from Timeseries of Monthly Means. Thanks Joe Hamman!

Bug fixes

  • Several bug fixes related to decoding time units from netCDF files (GH316, GH330). Thanks Stefan Pfenninger!

  • xray no longer requires decode_coords=False when reading datasets with unparseable coordinate attributes (GH308).

  • Fixed DataArray.loc indexing with (GH318).

  • Fixed an edge case that resulted in an error when reindexing multi-dimensional variables (GH315).

  • Fixed slicing with negative step sizes (GH312).

  • Fixed invalid conversion of string arrays to numeric dtype (GH305).

  • Fixed repr() on dataset objects with non-standard dates (GH347).

Deprecations

  • dump and dumps have been deprecated in favor of to_netcdf().

  • drop_vars has been deprecated in favor of drop().

Future plans

The biggest feature I’m excited about working toward in the immediate future is supporting out-of-core operations in xray using Dask, a part of the Blaze project. For a preview of using Dask with weather data, read this blog post by Matthew Rocklin. See GH328 for more details.

v0.3.2 (23 December, 2014)

This release focused on bug-fixes, speedups and resolving some niggling inconsistencies.

There are a few cases where the behavior of xray differs from the previous version. However, I expect that in almost all cases your code will continue to run unmodified.

Warning

xray now requires pandas v0.15.0 or later. This was necessary for supporting TimedeltaIndex without too many painful hacks.

Backwards incompatible changes

  • Arrays of datetime.datetime objects are now automatically cast to datetime64[ns] arrays when stored in an xray object, using machinery borrowed from pandas:

    In [100]: from datetime import datetime

    In [101]: xray.Dataset({'t': [datetime(2000, 1, 1)]})
    Out[101]:
    <xarray.Dataset>
    Dimensions:  (t: 1)
    Coordinates:
      * t        (t) datetime64[ns] 2000-01-01
    Data variables:
        *empty*

  • xray now has support (including serialization to netCDF) for TimedeltaIndex. datetime.timedelta objects are thus accordingly cast to timedelta64[ns] objects when appropriate.

  • Masked arrays are now properly coerced to use NaN as a sentinel value (GH259).

Enhancements

  • Due to popular demand, we have added experimental attribute style access as a shortcut for dataset variables, coordinates and attributes:

    In [102]: ds = xray.Dataset({'tmin': ([], 25, {'units': 'celcius'})})

    In [103]: ds.tmin.units
    Out[103]: 'celcius'

Tab-completion for these variables should work in editors such as IPython. However, setting variables or attributes in this fashion is not yet supported because there are some unresolved ambiguities (GH300).

  • You can now use a dictionary for indexing with labeled dimensions. This provides a safe way to do assignment with labeled dimensions:

    In [104]: array = xray.DataArray(np.zeros(5), dims=['x'])

    In [105]: array[dict(x=slice(3))] = 1

    In [106]: array
    Out[106]:
    <xarray.DataArray (x: 5)>
    array([1., 1., 1., 0., 0.])
    Dimensions without coordinates: x

  • Non-index coordinates can now be faithfully written to and restored from netCDF files. This is done according to CF conventions when possible by using the coordinates attribute on a data variable. When not possible, xray defines a global coordinates attribute.

  • Preliminary support for converting xray.DataArray objects to and from CDAT cdms2 variables.

  • We sped up any operation that involves creating a new Dataset or DataArray (e.g., indexing, aggregation, arithmetic) by a factor of 30 to 50%. The full speed up requires cyordereddict to be installed.

Bug fixes

  • Fix for to_dataframe() with 0d string/object coordinates (GH287)

  • Fix for to_netcdf with 0d string variable (GH284)

  • Fix writing datetime64 arrays to netcdf if NaT is present (GH270)

  • Fix align silently upcasts data arrays when NaNs are inserted (GH264)

Future plans

  • I am contemplating switching to the terms “coordinate variables” and “data variables” instead of the (currently used) “coordinates” and “variables”, following their use in CF Conventions (GH293). This would mostly have implications for the documentation, but I would also change the Dataset attribute vars to data.

  • I am no longer certain that automatic label alignment for arithmetic would be a good idea for xray – it is a feature from pandas that I have not missed (GH186).

  • The main API breakage that I do anticipate in the next release is finally making all aggregation operations skip missing values by default (GH130). I’m pretty sick of writing ds.reduce(np.nanmean, 'time').

  • The next version of xray (0.4) will remove deprecated features and aliases whose use currently raises a warning.

If you have opinions about any of these anticipated changes, I would love to hear them – please add a note to any of the referenced GitHub issues.

v0.3.1 (22 October, 2014)

This is mostly a bug-fix release to make xray compatible with the latest release of pandas (v0.15).

We added several features to better support working with missing values and exporting xray objects to pandas. We also reorganized the internal API for serializing and deserializing datasets, but this change should be almost entirely transparent to users.

Other than breaking the experimental DataStore API, there should be no backwards incompatible changes.

New features

  • Added count() and dropna() methods, copied from pandas, for working with missing values (GH247, GH58).
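
A minimal sketch (modern import spelling):

    import numpy as np
    import xarray as xr

    arr = xr.DataArray([1.0, np.nan, 3.0], dims='x')
    arr.count()      # 2: number of non-missing values
    arr.dropna('x')  # drops the NaN entry along 'x'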

  • Added DataArray.to_pandas for converting a data array into the pandas object with the same dimensionality (1D to Series, 2D to DataFrame, etc.) (GH255).
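
A minimal sketch (modern import spelling):

    import numpy as np
    import xarray as xr

    xr.DataArray([1, 2, 3], dims='x').to_pandas()                # pandas.Series
    xr.DataArray(np.zeros((2, 3)), dims=('x', 'y')).to_pandas()  # pandas.DataFrame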

  • Support for reading gzipped netCDF3 files (GH239).

  • Reduced memory usage when writing netCDF files (GH251).

  • 'missing_value' is now supported as an alias for the '_FillValue' attribute on netCDF variables (GH245).

  • Trivial indexes, equivalent to range(n) where n is the length of the dimension, are no longer written to disk (GH245).

Bug fixes

  • Compatibility fixes for pandas v0.15 (GH262).

  • Fixes for display and indexing of NaT (not-a-time) (GH238, GH240).

  • Fix slicing by label when an argument is a data array (GH250).

  • Test data is now shipped with the source distribution (GH253).

  • Ensure order does not matter when doing arithmetic with scalar data arrays(GH254).

  • Order of dimensions preserved with DataArray.to_dataframe (GH260).

v0.3 (21 September 2014)

New features

  • Revamped coordinates: “coordinates” now refer to all arrays that are not used to index a dimension. Coordinates are intended to allow for keeping track of arrays of metadata that describe the grid on which the points in “variable” arrays lie. They are preserved (when unambiguous) even through mathematical operations.

  • Dataset math: Dataset objects now support all arithmetic operations directly. Dataset-array operations map across all dataset variables; dataset-dataset operations act on each pair of variables with the same name.
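
A minimal sketch of the three cases (shown with the modern import xarray as xr spelling; at the time the package was imported as xray):

    import xarray as xr

    ds = xr.Dataset({'a': ('x', [1, 2, 3]), 'b': ('x', [10, 20, 30])})

    ds * 2        # dataset-scalar: applied to every data variable
    ds + ds['a']  # dataset-array: the array is added to both 'a' and 'b'
    ds - ds       # dataset-dataset: variables are matched by name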

  • GroupBy math: This provides a convenient shortcut for normalizing by the average value of a group.
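
A rough sketch of the kind of normalization this enables (modern import spelling; the time coordinate is illustrative):

    import numpy as np
    import pandas as pd
    import xarray as xr

    times = pd.date_range('2000-01-01', periods=365)
    arr = xr.DataArray(np.arange(365), coords=[('time', times)])

    # subtract each month's mean from the values belonging to that month
    anomalies = arr.groupby('time.month') - arr.groupby('time.month').mean('time')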

  • The dataset repr method has been entirely overhauled; dataset objects now show their values when printed.

  • You can now index a dataset with a list of variables to return a new dataset: ds[['foo', 'bar']].

Backwards incompatible changes

  • Dataset.eq and Dataset.ne are now element-wise operations instead of comparing all values to obtain a single boolean. Use the method equals() instead.
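
A minimal sketch of the distinction (modern import spelling):

    import xarray as xr

    ds1 = xr.Dataset({'a': ('x', [1, 2, 3])})
    ds2 = xr.Dataset({'a': ('x', [1, 2, 3])})

    ds1 == ds2       # element-wise comparison: a Dataset of booleans
    ds1.equals(ds2)  # single bool: True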

Deprecations

  • Dataset.noncoords is deprecated: use Dataset.vars instead.

  • Dataset.select_vars deprecated: index a Dataset with a list of variable names instead.

  • DataArray.select_vars and DataArray.drop_vars deprecated: use reset_coords() instead.

v0.2 (14 August 2014)

This is a major release that includes some new features and quite a few bug fixes. Here are the highlights:

  • There is now a direct constructor for DataArray objects, which makes it possible to create a DataArray without using a Dataset. This is highlighted in the refreshed tutorial.

  • You can perform aggregation operations like mean directly on Dataset objects, thanks to Joe Hamman. These aggregation methods also work on grouped datasets.

  • xray now works on Python 2.6, thanks to Anna Kuznetsova.

  • A number of methods and attributes were given more sensible (usually shorter) names: labeled -> sel, indexed -> isel, select -> select_vars, unselect -> drop_vars, dimensions -> dims, coordinates -> coords, attributes -> attrs.

  • New load_data() and close() methods for datasets facilitate a lower level of control of data loaded from disk.

v0.1.1 (20 May 2014)

xray 0.1.1 is a bug-fix release that includes changes that should be almost entirely backwards compatible with v0.1:

  • Python 3 support (GH53)

  • Required numpy version relaxed to 1.7 (GH129)

  • Return numpy.datetime64 arrays for non-standard calendars (GH126)

  • Support for opening datasets associated with NetCDF4 groups (GH127)

  • Bug-fixes for concatenating datetime arrays (GH134)

Special thanks to new contributors Thomas Kluyver, Joe Hamman and Alistair Miles.

v0.1 (2 May 2014)

Initial release.