Stingray API

Library of Time Series Methods For Astronomical X-ray Data.

Classes

Lightcurve

class stingray.Lightcurve(time, counts, err=None, input_counts=True, gti=None, err_dist='poisson', mjdref=0, dt=None)[source]

Make a light curve object from an array of time stamps and an array of counts.

Parameters:

time: iterable

A list or array of time stamps for a light curve

counts: iterable, optional, default None

A list or array of the counts in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second; otherwise use counts/bin).

err: iterable, optional, default None:

A list or array of the uncertainties in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second; otherwise use counts/bin). If None, we assume the data are Poisson-distributed and calculate the error from the average of the lower and upper 1-sigma confidence intervals for the Poisson distribution with mean equal to counts.

input_counts: bool, optional, default True

If True, the code assumes that the input data in ‘counts’ is in units of counts/bin. If False, it assumes the data in ‘counts’ is in counts/second.

gti: 2-d float array, default None

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They are not applied to the data by default. They will be used by other methods to have an indication of the “safe” time intervals to use during analysis.

err_dist: str, optional, default='poisson'

Statistic of the Lightcurve; it is used to calculate the uncertainties and other statistical values appropriately. If None, no assumptions are made and errors are kept equal to zero.

mjdref: float

MJD reference (useful in most high-energy mission data)

Attributes

time: numpy.ndarray The array of midpoints of time bins.
bin_lo: numpy.ndarray The array of lower edges of the time bins.
bin_hi: numpy.ndarray The array of upper edges of the time bins.
counts: numpy.ndarray The counts per bin corresponding to the bins in time.
counts_err: numpy.ndarray The uncertainties corresponding to counts
countrate: numpy.ndarray The counts per second in each of the bins defined in time.
countrate_err: numpy.ndarray The uncertainties corresponding to countrate
meanrate: float The mean count rate of the light curve.
meancounts: float The mean counts of the light curve.
n: int The number of data points in the light curve.
dt: float The time resolution of the light curve.
mjdref: float MJD reference date (mjdref + tstart / 86400 gives the date in MJD at the start of the observation)
tseg: float The total duration of the light curve.
tstart: float The start time of the light curve.
gti: 2-d float array [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They indicate the “safe” time intervals to be used during the analysis of the light curve.
err_dist: string Statistic of the Lightcurve; it is used to calculate the uncertainties and other statistical values appropriately. It propagates to Spectrum classes.
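
As a quick illustration of the quantities above, the relationship between counts, count rate and the Poisson error can be sketched in plain numpy. The data here are hypothetical, and note that Stingray's actual Poisson errors come from confidence intervals, so sqrt(counts) is only an approximation:

```python
import numpy as np

# Hypothetical light curve: 5 bins of width dt = 1 s.
dt = 1.0
time = np.arange(5) * dt                 # bin midpoints
counts = np.array([10, 12, 9, 11, 10])   # counts per bin (input_counts=True)

countrate = counts / dt                  # counts per second in each bin
counts_err = np.sqrt(counts)             # rough Poisson uncertainty per bin

meancounts = counts.mean()
meanrate = countrate.mean()
```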

Covariancespectrum

class stingray.Covariancespectrum(event_list, dt, band_interest=None, ref_band_interest=None, std=None)[source]
Parameters:

event_list : numpy 2D array

A numpy 2D array with the time of arrival in the first column and the associated photon energy in the second column. Note: the event list must be sorted with respect to the times of arrival.

dt : float

The time resolution of the Lightcurve formed from each energy bin.

band_interest : iterable of tuples, default All

An iterable of tuples with the minimum and maximum values of the range in each band of interest, e.g. a list of tuples or a tuple of tuples.

ref_band_interest : tuple of reference band range, default All

A tuple with the minimum and maximum values of the range of the reference band.

std : float or np.array or list of numbers

The term std is used to calculate the excess variance of a band. If std is None, the default Poisson case is assumed and the std is calculated as mean(lc)**0.5. If a single float is given, it is used directly as the standard deviation. If an iterable of numbers is given, their mean is used for the same purpose.

Examples

See https://github.com/StingraySoftware/notebooks repository for detailed notebooks on the code.

Attributes

energy_events (dictionary) A dictionary with energy bins as keys and the times of arrival of photons with that energy as values.
energy_covar (dictionary) A dictionary with the midpoint of each band_interest as key and its covariance, computed with its individual reference band, as value. The covariance values are normalized.
unnorm_covar (np.ndarray) An array of arrays with the midpoint of each band_interest and its covariance. It is the array form of the dictionary energy_covar. The covariance values are unnormalized.
covar (np.ndarray) Normalized covariance spectrum.
covar_error (np.ndarray) Errors of the normalized covariance spectrum.
min_time (int) Time of arrival of the earliest photon.
max_time (int) Time of arrival of the last photon.
min_energy (float) Energy of the photon with the minimum energy.
max_energy (float) Energy of the photon with the maximum energy.
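
The core quantity collected per band, the unnormalized covariance with the reference band, can be sketched with numpy. The binned light curves below are hypothetical; the actual class builds them from the event list:

```python
import numpy as np

# Hypothetical band-of-interest and reference-band light curves,
# binned on the same time grid.
lc_band = np.array([4.0, 6.0, 5.0, 7.0, 3.0])
lc_ref = np.array([8.0, 12.0, 10.0, 14.0, 6.0])

# Unnormalized covariance between the band and the reference band:
cov = np.mean((lc_band - lc_band.mean()) * (lc_ref - lc_ref.mean()))

# Default Poisson std used when the std parameter is None:
std = np.mean(lc_band) ** 0.5
```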

AveragedCovariancespectrum

class stingray.AveragedCovariancespectrum(event_list, dt, segment_size, band_interest=None, ref_band_interest=None, std=None)[source]

Make an averaged covariance spectrum by segmenting the light curve formed, calculating covariance for each segment and then averaging the resulting covariance spectra.

Parameters:

event_list : numpy 2D array

A numpy 2D array with the time of arrival in the first column and the associated photon energy in the second column. Note: the event list must be sorted with respect to the times of arrival.

dt : float

The time resolution of the Lightcurve formed from each energy bin.

segment_size : float

The size of each segment to average. Note that if the total duration of each Lightcurve object formed is not an integer multiple of the segment_size, then any fraction left-over at the end of the time series will be lost.

band_interest : iterable of tuples, default All

An iterable of tuples with the minimum and maximum values of the range in each band of interest, e.g. a list of tuples or a tuple of tuples.

ref_band_interest : tuple of reference band range, default All

A tuple with the minimum and maximum values of the range of the reference band.

std : float or np.array or list of numbers

The term std is used to calculate the excess variance of a band. If std is None, the default Poisson case is assumed and the std is calculated as mean(lc)**0.5. If a single float is given, it is used directly as the standard deviation. If an iterable of numbers is given, their mean is used for the same purpose.

Attributes

energy_events (dictionary) A dictionary with energy bins as keys and the times of arrival of photons with that energy as values.
energy_covar (dictionary) A dictionary with the midpoint of each band_interest as key and its covariance, computed with its individual reference band, as value. The covariance values are normalized.
unnorm_covar (np.ndarray) An array of arrays with the midpoint of each band_interest and its covariance. It is the array form of the dictionary energy_covar. The covariance values are unnormalized.
covar (np.ndarray) Normalized covariance spectrum.
covar_error (np.ndarray) Errors of the normalized covariance spectrum.
min_time (int) Time of arrival of the earliest photon.
max_time (int) Time of arrival of the last photon.
min_energy (float) Energy of the photon with the minimum energy.
max_energy (float) Energy of the photon with the maximum energy.

Crossspectrum

class stingray.Crossspectrum(lc1=None, lc2=None, norm='none', gti=None)[source]

Make a cross spectrum from a (binned) light curve. You can also make an empty Crossspectrum object to populate with your own Fourier-transformed data (this can sometimes be useful when making binned periodograms).

Parameters:

lc1: lightcurve.Lightcurve object, optional, default None

The first light curve data for the channel/band of interest.

lc2: lightcurve.Lightcurve object, optional, default None

The light curve data for the reference band.

norm: {‘frac’, ‘abs’, ‘leahy’, ‘none’}, default ‘none’

The normalization of the (real part of the) cross spectrum.

Other Parameters:
 

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!

Attributes

freq: numpy.ndarray The array of mid-bin frequencies that the Fourier transform samples
power: numpy.ndarray The array of cross spectra (complex numbers)
power_err: numpy.ndarray The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.
df: float The frequency resolution
m: int The number of averaged cross-spectra amplitudes in each bin.
n: int The number of data points/time bins in one segment of the light curves.
nphots1: float The total number of photons in light curve 1
nphots2: float The total number of photons in light curve 2
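
A minimal numpy sketch of the underlying computation. The sign and conjugation convention here is an assumption, and the class additionally applies the chosen normalization:

```python
import numpy as np

n, dt = 8, 1.0
t = np.arange(n) * dt
lc1 = 2.0 + np.sin(2 * np.pi * t / n)   # band of interest
lc2 = 2.0 + np.sin(2 * np.pi * t / n)   # reference band (identical here)

fourier1 = np.fft.rfft(lc1)
fourier2 = np.fft.rfft(lc2)
cross = fourier2 * np.conj(fourier1)    # unnormalized cross spectrum

freq = np.fft.rfftfreq(n, d=dt)
df = freq[1] - freq[0]                  # frequency resolution
```

Because the two light curves are identical here, the cross spectrum reduces to the (real, non-negative) power spectrum.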

AveragedCrossspectrum

class stingray.AveragedCrossspectrum(lc1=None, lc2=None, segment_size=None, norm='none', gti=None)[source]

Make an averaged cross spectrum from a light curve by segmenting two light curves, Fourier-transforming each segment and then averaging the resulting cross spectra.

Parameters:

lc1: lightcurve.Lightcurve object OR iterable of lightcurve.Lightcurve objects

The light curve data to be Fourier-transformed. This is the band of interest or channel of interest.

lc2: lightcurve.Lightcurve object OR iterable of lightcurve.Lightcurve objects

The second light curve data to be Fourier-transformed. This is the reference band.

segment_size: float

The size of each segment to average. Note that if the total duration of each Lightcurve object in lc1 or lc2 is not an integer multiple of the segment_size, then any fraction left over at the end of the time series will be lost, since including incomplete segments would introduce artefacts.

norm: {‘frac’, ‘abs’, ‘leahy’, ‘none’}, default ‘none’

The normalization of the (real part of the) cross spectrum.

Other Parameters:
 

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!

Attributes

freq: numpy.ndarray The array of mid-bin frequencies that the Fourier transform samples
power: numpy.ndarray The array of cross spectra
power_err: numpy.ndarray The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.
df: float The frequency resolution
m: int The number of averaged cross spectra
n: int The number of time bins in one segment of the light curves.
nphots1: float The total number of photons in the first (interest) light curve
nphots2: float The total number of photons in the second (reference) light curve
gti: 2-d float array [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. They are calculated by taking the common GTI between the two light curves

Powerspectrum

class stingray.Powerspectrum(lc=None, norm='frac', gti=None)[source]

Make a Periodogram (power spectrum) from a (binned) light curve. Periodograms can be Leahy-normalized or fractional-rms-normalized. You can also make an empty Periodogram object to populate with your own Fourier-transformed data (this can sometimes be useful when making binned periodograms).

Parameters:

lc: lightcurve.Lightcurve object, optional, default None

The light curve data to be Fourier-transformed.

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the periodogram to be used.

Other Parameters:
 

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!

Attributes

norm: {“leahy” | “frac” | “abs” | “none”} the normalization of the periodogram
freq: numpy.ndarray The array of mid-bin frequencies that the Fourier transform samples
power: numpy.ndarray The array of normalized squared absolute values of Fourier amplitudes
power_err: numpy.ndarray The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.
df: float The frequency resolution
m: int The number of averaged powers in each bin
n: int The number of data points in the light curve
nphots: float The total number of photons in the light curve
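
The Leahy normalization, for example, can be sketched as follows. The data are hypothetical, and details such as mean subtraction and which frequencies are kept differ in the real implementation:

```python
import numpy as np

counts = np.array([5.0, 7.0, 6.0, 8.0, 5.0, 7.0, 6.0, 8.0])
nphots = counts.sum()

fourier = np.fft.rfft(counts)
power_leahy = 2.0 * np.abs(fourier) ** 2 / nphots
```

In this normalization, powers from pure Poisson noise are distributed around 2.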

AveragedPowerspectrum

class stingray.AveragedPowerspectrum(lc=None, segment_size=None, norm='frac', gti=None)[source]

Make an averaged periodogram from a light curve by segmenting the light curve, Fourier-transforming each segment and then averaging the resulting periodograms.

Parameters:

lc: lightcurve.Lightcurve object OR iterable of lightcurve.Lightcurve objects

The light curve data to be Fourier-transformed.

segment_size: float

The size of each segment to average. Note that if the total duration of each Lightcurve object in lc is not an integer multiple of the segment_size, then any fraction left-over at the end of the time series will be lost.

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the periodogram to be used.

Other Parameters:
 

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!

Attributes

norm: {“leahy” | “frac” | “abs” | “none”} the normalization of the periodogram
freq: numpy.ndarray The array of mid-bin frequencies that the Fourier transform samples
power: numpy.ndarray The array of normalized squared absolute values of Fourier amplitudes
power_err: numpy.ndarray The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.
df: float The frequency resolution
m: int The number of averaged periodograms
n: int The number of data points in the light curve
nphots: float The total number of photons in the light curve
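
The segment-and-average procedure, including the power_err = power / sqrt(m) approximation, can be sketched as follows (hypothetical data, unnormalized periodograms):

```python
import numpy as np

counts = np.arange(32, dtype=float) % 8 + 10.0   # hypothetical light curve
segment_size = 8                                  # bins per segment
m = len(counts) // segment_size                   # left-over bins are dropped

segments = counts[: m * segment_size].reshape(m, segment_size)
powers = np.abs(np.fft.rfft(segments, axis=1)) ** 2  # periodogram per segment

avg_power = powers.mean(axis=0)      # the averaged periodogram
power_err = avg_power / np.sqrt(m)   # approximate uncertainty
```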

Functions

stingray.baseline_als(y, lam, p, niter=10)[source]

Baseline Correction with Asymmetric Least Squares Smoothing.

Modifications to the routine from Eilers & Boelens 2005 (https://www.researchgate.net/publication/228961729_Technical_Report_Baseline_Correction_with_Asymmetric_Least_Squares_Smoothing). The Python translation is partly from http://stackoverflow.com/questions/29156532/python-baseline-correction-library

Parameters:

y : array of floats

the “light curve”. The samples are assumed to be equally spaced.

lam : float

“smoothness” parameter. Larger values make the baseline stiffer. Typically 1e2 < lam < 1e9.

p : float

“asymmetry” parameter. Smaller values make the baseline more “horizontal”. Typically 0.001 < p < 0.1, but these are not strict limits.
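
A dense-matrix sketch of the algorithm, using the parameter roles described above (the library version uses sparse matrices for efficiency, so this is illustrative only):

```python
import numpy as np

def baseline_als_sketch(y, lam, p, niter=10):
    """Asymmetric least squares baseline (Eilers & Boelens 2005), dense sketch."""
    n = len(y)
    # Second-difference operator D; the smoothness penalty is lam * ||D z||^2.
    D = np.diff(np.eye(n), 2, axis=0)
    penalty = lam * D.T @ D
    w = np.ones(n)
    for _ in range(niter):
        # Weighted, smoothness-penalized least-squares solve for the baseline z.
        z = np.linalg.solve(np.diag(w) + penalty, w * y)
        # Asymmetric reweighting: points above the baseline (peaks) count less.
        w = np.where(y > z, p, 1 - p)
    return z
```

For a flat input the recovered baseline is the input itself; peaks get progressively down-weighted at each iteration.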

stingray.coherence(lc1, lc2)[source]

Estimate coherence function of two light curves.

Parameters:

lc1: lightcurve.Lightcurve object

The first light curve data for the channel of interest.

lc2: lightcurve.Lightcurve object

The light curve data for the reference band.

Returns:

coh : np.ndarray

Coherence function
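
The estimator behind the coherence function can be sketched with numpy over an ensemble of segments. The data below are hypothetical; stingray.coherence itself takes two Lightcurve objects:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, n = 16, 64
lc1 = rng.poisson(100.0, (n_seg, n)).astype(float)   # channel of interest
lc2 = lc1 + rng.normal(0.0, 1.0, (n_seg, n))         # correlated reference band

f1 = np.fft.rfft(lc1, axis=1)
f2 = np.fft.rfft(lc2, axis=1)

# Averaged cross spectrum and power spectra over the segments:
cross = (f2 * np.conj(f1)).mean(axis=0)
p1 = (np.abs(f1) ** 2).mean(axis=0)
p2 = (np.abs(f2) ** 2).mean(axis=0)

coh = np.abs(cross) ** 2 / (p1 * p2)   # coherence, between 0 and 1
```

By the Cauchy-Schwarz inequality the estimate always lies between 0 and 1; values near 1 indicate that the two bands share the same variability.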

stingray.contiguous_regions(condition)[source]

Find contiguous True regions of the boolean array “condition”.

Return a 2D array where the first column is the start index of the region and the second column is the end index.

Parameters:

condition : boolean array

Returns:

idx : [[i0_0, i0_1], [i1_0, i1_1], ...]

A list of integer couples, with the start and end of each True block in the original array.

Notes

From: http://stackoverflow.com/questions/4494404/find-large-number-of-consecutive-values-fulfilling-condition-in-a-numpy-array
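
A sketch of the technique in numpy. Whether the returned end index is inclusive or exclusive is an assumption here (this sketch makes it exclusive); check the actual output:

```python
import numpy as np

def contiguous_regions_sketch(condition):
    """Start/stop index pairs for each run of True values (stop exclusive)."""
    d = np.diff(condition.astype(int))   # +1 at rising edges, -1 at falling edges
    starts = np.where(d == 1)[0] + 1
    stops = np.where(d == -1)[0] + 1
    if condition[0]:                     # a run starts at the first element
        starts = np.r_[0, starts]
    if condition[-1]:                    # a run continues to the last element
        stops = np.r_[stops, len(condition)]
    return np.column_stack((starts, stops))

mask = np.array([False, True, True, False, True])
regions = contiguous_regions_sketch(mask)
```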

stingray.create_window(N, window_type='uniform')[source]

A method to create window functions commonly used in signal processing. Supported windows include the uniform (rectangular), Parzen, Hamming, Hanning, triangular, Welch, Blackman and flat-top windows.

Parameters:

N : int

Total number of data points in window. If negative, abs is taken.

window_type : {‘uniform’, ‘parzen’, ‘hamming’, ‘hanning’, ‘traingular’, ‘welch’, ‘blackmann’, ‘flat-top’}, optional, default ‘uniform’

Type of window to create.

Returns:

window: numpy.ndarray

Window function of length N.

stingray.excess_variance(lc, normalization='fvar')[source]

Calculate the excess variance.

Vaughan+03

Parameters:

lc : a Lightcurve object

normalization : str

If ‘fvar’, return the normalized square-root excess variance (the fractional rms variability amplitude). If ‘none’, return the unnormalized excess variance.

Returns:

var_xs : float

var_xs_err : float
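
A sketch of the quantities involved, following the Vaughan et al. 2003 definitions, on hypothetical data:

```python
import numpy as np

counts = np.array([20.0, 30.0, 15.0, 35.0, 25.0])
counts_err = np.sqrt(counts)              # Poisson uncertainties

mean = counts.mean()
var = counts.var(ddof=1)                  # sample variance S^2
mse = np.mean(counts_err ** 2)            # mean square measurement error

var_xs = var - mse                        # excess variance ('none')
fvar = np.sqrt(var_xs) / mean             # normalized square-root version ('fvar')
```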

stingray.is_iterable(stuff)[source]

Test if stuff is an iterable.

stingray.is_string(s)[source]

Portable function to test whether s is a string.

stingray.optimal_bin_time(fftlen, tbin)[source]

Slightly vary the bin time so that the number of bins is a power of two.

Given an FFT length and a proposed bin time, return a bin time slightly shorter than the original, that will produce a power-of-two number of FFT bins.
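
A sketch of the behaviour described above. The rounding strategy assumed here is to take the next power-of-two number of bins, which can only shorten the bin time:

```python
import numpy as np

def optimal_bin_time_sketch(fftlen, tbin):
    """Return a bin time <= tbin giving a power-of-two number of FFT bins."""
    nbins = fftlen / tbin
    return fftlen / 2 ** np.ceil(np.log2(nbins))
```

For example, with fftlen=512 and tbin=1.1 the proposed 465.45 bins are rounded up to 512 = 2**9, giving a bin time of 1.0.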

stingray.rebin_data(x, y, dx_new, yerr=None, method='sum', dx=None)[source]

Rebin some data to an arbitrary new data resolution. Either sum the data points in the new bins or average them.

Parameters:

x: iterable

The independent variable, with some resolution dx_old = x[1] - x[0]

y: iterable

The dependent variable to be binned

dx_new: float

The new resolution of the independent variable x

Returns:

xbin: numpy.ndarray

The midpoints of the new bins in x

ybin: numpy.ndarray

The binned quantity y

ybin_err: numpy.ndarray

The uncertainties of the binned values of y.

step_size: float

The size of the binning step

Other Parameters:
 

yerr: iterable, optional

The uncertainties of y, to be propagated during binning.

method: {“sum” | “average” | “mean”}, optional, default “sum”

The method to be used in binning. Either sum the samples y in each new bin of x, or take the arithmetic mean.

dx: float

The old resolution (otherwise, calculated from median diff)
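
For the simple case where dx_new is an integer multiple of the old resolution, the rebinning can be sketched as follows (the real function also handles non-integer factors and error propagation):

```python
import numpy as np

x = np.arange(8) * 0.5          # old bin midpoints, dx = 0.5
y = np.ones(8)                  # quantity to rebin
dx_new = 1.0                    # new resolution, here 2x the old one

step = int(round(dx_new / 0.5))          # old bins merged per new bin
ybin = y.reshape(-1, step).sum(axis=1)   # method='sum'
xbin = x.reshape(-1, step).mean(axis=1)  # midpoints of the new bins
```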

stingray.rebin_data_log(x, y, f, y_err=None, dx=None)[source]

Logarithmic rebin of the periodogram.

The new frequency depends on the previous frequency modified by a factor f:

dnu_j = dnu_{j-1}*(1+f)

Parameters:

x: iterable

The independent variable, with some resolution dx_old = x[1] - x[0]

y: iterable

The dependent variable to be binned

f: float

The factor of increase of each bin with respect to the previous one.

Returns:

xbin: numpy.ndarray

The midpoints of the new bins in x

ybin: numpy.ndarray

The binned quantity y

ybin_err: numpy.ndarray

The uncertainties of the binned values of y.

step_size: float

The size of the binning step

Other Parameters:
 

yerr: iterable, optional

The uncertainties of y, to be propagated during binning.

method: {“sum” | “average” | “mean”}, optional, default “sum”

The method to be used in binning. Either sum the samples y in each new bin of x, or take the arithmetic mean.

dx: float, optional

The binning step of the initial xs
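
The geometric growth of the bin widths described by dnu_j = dnu_{j-1}*(1+f) can be sketched as follows (hypothetical numbers):

```python
import numpy as np

f = 0.1        # each bin is 10% wider than the previous one
df = 1.0       # width of the first frequency bin
fmax = 100.0   # highest frequency to cover

edges = [0.0]
dnu = df
while edges[-1] < fmax:
    edges.append(edges[-1] + dnu)
    dnu *= 1 + f              # dnu_j = dnu_{j-1} * (1 + f)

widths = np.diff(edges)
```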

stingray.simon(message, **kwargs)[source]

The Statistical Interpretation MONitor.

A warning system designed to always remind the user that Simon is watching him/her.

Parameters:

message : string

The warning message to be issued

kwargs : dict

The rest of the arguments that are passed to warnings.warn

stingray.test(package=None, test_path=None, args=None, plugins=None, verbose=False, pastebin=None, remote_data=False, pep8=False, pdb=False, coverage=False, open_files=False, **kwargs)[source]

Run the tests using py.test. A proper set of arguments is constructed and passed to pytest.main.

Parameters:

package : str, optional

The name of a specific package to test, e.g. ‘io.fits’ or ‘utils’. If nothing is specified all default tests are run.

test_path : str, optional

Specify location to test by path. May be a single file or directory. Must be specified absolutely or relative to the calling directory.

args : str, optional

Additional arguments to be passed to pytest.main in the args keyword argument.

plugins : list, optional

Plugins to be passed to pytest.main in the plugins keyword argument.

verbose : bool, optional

Convenience option to turn on verbose output from py.test. Passing True is the same as specifying '-v' in args.

pastebin : {‘failed’,’all’,None}, optional

Convenience option for turning on py.test pastebin output. Set to 'failed' to upload info for failed tests, or 'all' to upload info for all tests.

remote_data : bool, optional

Controls whether to run tests marked with @remote_data. These tests use online data and are not run by default. Set to True to run these tests.

pep8 : bool, optional

Turn on PEP8 checking via the pytest-pep8 plugin and disable normal tests. Same as specifying '--pep8 -k pep8' in args.

pdb : bool, optional

Turn on PDB post-mortem analysis for failing tests. Same as specifying '--pdb' in args.

coverage : bool, optional

Generate a test coverage report. The result will be placed in the directory htmlcov.

open_files : bool, optional

Fail when any tests leave files open. Off by default, because this adds extra run time to the test suite. Requires the psutil package.

parallel : int, optional

When provided, run the tests in parallel on the specified number of CPUs. If parallel is negative, it will use all the cores on the machine. Requires the pytest-xdist plugin. Only available when using Astropy 0.3 or later.

kwargs

Any additional keywords passed into this function will be passed on to the astropy test runner. This allows use of test-related functionality implemented in later versions of astropy without explicitly updating the package template.