Stingray API

Library of Time Series Methods For Astronomical X-ray Data.

Base Class

Most stingray classes are subclasses of a single class, stingray.StingrayObject, which implements most of the I/O functionality and common behavior (e.g. strategies to combine data and perform operations such as the sum, difference, or negation). This class is not intended to be instantiated directly, but rather to be used as a base class for other classes. Any class inheriting from stingray.StingrayObject should define a main_array_attr attribute, which gives the name of the attribute storing the “independent variable” main data array. For example, for all time series-like objects, the main array is the time array, while for periodograms the main array is the frequency array.

All arrays sharing the length (not the shape: they might be multi-dimensional!) of the main array are called “array attributes” and are accessible through the array_attrs method. When applying a mask or any other selection to a stingray.StingrayObject, all array attributes are filtered in the same way. Some array-like attributes might have the same length by chance; in this case, the user or the developer should add them to the not_array_attr attribute. For example, stingray.StingrayTimeseries has gti among the not_array_attr entries, since it is an array but not related 1-to-1 to the main array, even if in some cases it might happen to have the same number of elements as the main array, which is time.
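A minimal sketch of a custom subclass, just to illustrate the mechanism described above (the MySeries class and its flux attribute are made up for this example):

>>> import numpy as np
>>> from stingray import StingrayObject
>>> class MySeries(StingrayObject):
...     main_array_attr = "time"
...     def __init__(self, time, flux):
...         # time is the main array; flux shares its length, so it
...         # becomes an array attribute automatically
...         self.time = np.asarray(time)
...         self.flux = np.asarray(flux)
>>> obj = MySeries([1, 2, 3], [10.0, 20.0, 30.0])
>>> "flux" in obj.array_attrs()
True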

StingrayObject

class stingray.StingrayObject(*args, **kwargs)[source]

This base class defines some general-purpose utilities.

The main purpose is to have a consistent mechanism for:

  • round-tripping to and from Astropy Tables and other dataframes

  • round-tripping to files in different formats

The idea is that any object inheriting StingrayObject should, just by defining an attribute called main_array_attr, be able to perform the operations above, with no additional effort.

main_array_attr is, e.g. time for StingrayTimeseries and Lightcurve, freq for Crossspectrum, energy for VarEnergySpectrum, and so on. It is the array with which all other attributes are compared: if they are of the same shape, they get saved as columns of the table/dataframe, otherwise as metadata.
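As a quick illustration of the round-tripping mechanism (a sketch using StingrayTimeseries, described further below):

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> ts = StingrayTimeseries(np.arange(5), array_attrs={"flux": np.ones(5)}, dt=1)
>>> table = ts.to_astropy_table()  # array attrs become columns, meta attrs metadata
>>> ts2 = StingrayTimeseries.from_astropy_table(table)
>>> assert np.allclose(ts2.flux, ts.flux)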

add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)[source]

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.
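Examples

A sketch of a typical use, assuming two series sharing the same time stamps:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(5)
>>> ts1 = StingrayTimeseries(time, array_attrs={"counts": np.ones(5)}, dt=1)
>>> ts2 = StingrayTimeseries(time, array_attrs={"counts": 2 * np.ones(5)}, dt=1)
>>> ts_sum = ts1.add(ts2)  # counts are summed element by element
>>> assert np.allclose(ts_sum.counts, 3)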

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)[source]

Apply a mask to all array attributes of the object

Parameters:
mask : array of bool

The mask. Has to be of the same length as self.time

Returns:
ts_new : StingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplace : bool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrs : list of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.
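Examples

A sketch of a simple selection (the flux attribute is illustrative):

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> ts = StingrayTimeseries(np.arange(5), array_attrs={"flux": 2.0 * np.arange(5)}, dt=1)
>>> ts_new = ts.apply_mask(ts.time < 3)  # keep only the first three bins
>>> assert np.allclose(ts_new.time, [0, 1, 2])
>>> assert np.allclose(ts_new.flux, [0, 2, 4])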

array_attrs() list[str][source]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributes : list of str

List of array attributes.

data_attributes() list[str][source]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize, such as other StingrayObject instances or arrays of objects.

This function does not distinguish between array-like data and scalar data.

Returns:
data_attributes : list of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

dict() dict[source]

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso[source]

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in the table meta dictionary will form the new meta attributes of the object.

classmethod from_pandas(ts: DataFrame) Tso[source]

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

classmethod from_xarray(ts: Dataset) Tso[source]

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict[source]

Give a dictionary with all non-None meta attrs of the object.

internal_array_attrs() list[str][source]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (e.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList).

Returns:
attributes : list of str

List of internal array attributes.

meta_attrs() list[str][source]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributes : list of str

List of meta attributes.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str[source]

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_apply : function

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_apply : list of str

Attributes to be modified by func_to_apply.

attrs_to_discard : list of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso[source]

Generic reader for StingrayObject

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

Note:

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is None, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)[source]

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

to_astropy_table(no_longdouble=False) Table[source]

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdouble : bool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_pandas() DataFrame[source]

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset[source]

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

write(filename: str, fmt: str | None = None) None[source]

Generic writer for StingrayObject

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Note:

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is None, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag

Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits
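Examples

A sketch of a write/read round trip through ECSV, assuming a writable working directory:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> ts = StingrayTimeseries(np.arange(5), array_attrs={"flux": np.ones(5)}, dt=1, mjdref=55000)
>>> ts.write("ts.ecsv", fmt="ascii.ecsv")  # ECSV preserves metadata
>>> ts2 = StingrayTimeseries.read("ts.ecsv", fmt="ascii.ecsv")
>>> assert np.allclose(ts2.time, ts.time)
>>> assert ts2.mjdref == ts.mjdref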


Data Classes

These classes define basic functionality related to common data types and typical methods that apply to these data types, including basic read/write functionality. Currently implemented are stingray.StingrayTimeseries, stingray.Lightcurve and stingray.events.EventList.

All time series-like data classes inherit from stingray.StingrayTimeseries, which implements most of the common functionality. The main data array is stored in the time attribute. Good Time Intervals (GTIs) are stored in the gti attribute, which is a list of 2-tuples or 2-lists containing the start and stop times of each GTI. The gti attribute is not an array attribute, since it is not related 1-to-1 to the main array, even if in some cases it might happen to have the same number of elements as the main array. It is by default added to the not_array_attr attribute.

StingrayTimeseries

class stingray.StingrayTimeseries(time: TTime = None, array_attrs: dict = {}, mjdref: TTime = 0, notes: str = '', gti: npt.ArrayLike = None, high_precision: bool = False, ephem: str = None, timeref: str = None, timesys: str = None, skip_checks: bool = False, **other_kw)[source]

Basic class for time series data.

This can be events, binned light curves, unevenly sampled light curves, etc. The only requirement is that the data (which can be any quantity, related or not to an electromagnetic measurement) are associated with a time measurement. We make a distinction between the array attributes, which have the same length as the time array, and the meta attributes, which can be scalars or arrays of a different size. The array attributes can be multidimensional (e.g. a spectrum for each time bin), but their first dimension (array.shape[0]) must match the length of the time array.

Array attributes are singled out automatically depending on their shape. All filtering operations (e.g. apply_gtis, rebin, etc.) are applied to array attributes only. For this reason, it is advisable to mark any attribute that should not be treated as an array attribute by adding it to the not_array_attr list.

Parameters:
time: iterable

A list or array of time stamps

Other Parameters:
array_attrs : dict

Array attributes to be set (e.g. {"flux": flux_array, "flux_err": flux_err_array}). In principle, they could be specified as simple keyword arguments. But this way, we will run a check on the length of the arrays, and raise an error if they are not of a shape compatible with the time array.

dt: float

The time resolution of the time series. Can be a scalar or an array attribute (useful for non-evenly sampled data or events from different instruments)

mjdref : float

The MJD used as a reference for the time array.

gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

high_precision : bool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.

timeref : str

The time reference, as recorded in the FITS file (e.g. SOLARSYSTEM)

timesys : str

The time system, as recorded in the FITS file (e.g. TDB)

ephem : str

The JPL ephemeris used to barycenter the data, if any (e.g. DE430)

skip_checks : bool

Skip checks on the time array. Useful when the user is reasonably sure that the input data are valid.

**other_kw

Used internally. Any other keyword arguments will be set as attributes of the object.

Attributes:
time: numpy.ndarray

The array of time stamps, in seconds from the reference MJD defined in mjdref

not_array_attr: list

List of attributes that are never to be considered as array attributes. For example, GTIs are not array attributes.

dt: float

The time resolution of the measurements. Can be a scalar or an array attribute (useful for non-evenly sampled data or events from different instruments). It can also be 0, which means that the time series is not evenly sampled and the effects of the time resolution are considered negligible for the analysis. This is sometimes the case for events from high-energy telescopes.

mjdref : float

The MJD used as a reference for the time array.

gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

high_precision : bool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.
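Examples

A sketch of a typical instantiation, with an illustrative flux array attribute:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(0, 10, 0.5)
>>> flux = np.random.normal(100, 10, time.size)
>>> ts = StingrayTimeseries(time, array_attrs={"flux": flux}, dt=0.5, mjdref=58000)
>>> "flux" in ts.array_attrs()
True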

analyze_by_gti(func, fraction_step=1, **kwargs)[source]

Analyze the light curve with any function, on a GTI-by-GTI base.

Parameters:
func : function

Function accepting a StingrayTimeseries object as single argument, plus possible additional keyword arguments, and returning a number or a tuple - e.g., (result, error) where both result and error are numbers.

Returns:
start_times : array

Lower time boundaries of all time segments.

stop_times : array

Upper time boundaries of all segments.

result : array of N elements

The result of func for each segment of the light curve

Other Parameters:
fraction_step : float

By default, segments do not overlap (fraction_step = 1). If fraction_step < 1, then the start points of consecutive segments are fraction_step * segment_size apart, and consecutive segments overlap. For example, for fraction_step = 0.5, the window shifts by one half of segment_size.

kwargs : keyword arguments

Additional keyword arguments that, if present, will be passed to func
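Examples

A sketch of a GTI-by-GTI mean, with illustrative GTI values:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(0, 10, 0.1)
>>> counts = np.zeros_like(time) + 10
>>> ts = StingrayTimeseries(time, counts=counts, dt=0.1, gti=[[-0.05, 4.95], [4.95, 9.95]])
>>> start, stop, res = ts.analyze_by_gti(lambda ts: np.mean(ts.counts))
>>> assert np.allclose(res, 10)  # one mean value per GTI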

analyze_segments(func, segment_size, fraction_step=1, **kwargs)[source]

Analyze segments of the light curve with any function.

Parameters:
func : function

Function accepting a StingrayTimeseries object as single argument, plus possible additional keyword arguments, and returning a number or a tuple - e.g., (result, error) where both result and error are numbers.

segment_size : float

Length in seconds of the light curve segments. If None, the full GTIs are considered instead as segments.

Returns:
start_times : array

Lower time boundaries of all time segments.

stop_times : array

Upper time boundaries of all segments.

result : array of N elements

The result of func for each segment of the light curve

Other Parameters:
fraction_step : float

If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half of segment_size)

kwargs : keyword arguments

Additional keyword arguments that, if present, will be passed to func

Examples

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(0, 10, 0.1)
>>> counts = np.zeros_like(time) + 10
>>> ts = StingrayTimeseries(time, counts=counts, dt=0.1)
>>> # Define a function that calculates the mean
>>> mean_func = lambda ts: np.mean(ts.counts)
>>> # Calculate the mean in segments of 5 seconds
>>> start, stop, res = ts.analyze_segments(mean_func, 5)
>>> len(res) == 2
True
>>> np.allclose(res, 10)
True
apply_gtis(new_gti=None, inplace: bool = True)[source]

Apply Good Time Intervals (GTIs) to a time series. Filters all the array attributes, only keeping the bins that fall into GTIs.

Parameters:
inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

change_mjdref(new_mjdref: float, inplace=False) StingrayTimeseries[source]

Change the MJD reference time (MJDREF) of the time series

The times of the time series will be shifted in order to be referred to this new MJDREF

Parameters:
new_mjdref : float

New MJDREF

Returns:
new_ts : StingrayTimeseries object

The new time series, with time stamps referred to the new MJDREF

Other Parameters:
inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.
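Examples

A sketch, shifting the reference by exactly one day (86400 s):

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> ts = StingrayTimeseries([86400.0, 86401.0], dt=1, mjdref=55000)
>>> ts_new = ts.change_mjdref(55001)
>>> assert np.allclose(ts_new.time, [0.0, 1.0])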

concatenate(other, check_gti=True)[source]

Concatenate two StingrayTimeseries objects.

This method concatenates two or more StingrayTimeseries objects along the time axis. GTIs are recalculated by merging all the GTIs together. GTIs should not overlap at any point.

Parameters:
other : StingrayTimeseries object or list of StingrayTimeseries objects

A second time series object, or a list of objects to be concatenated

Other Parameters:
check_gti : bool

Check if the GTIs are overlapping or not. Default: True. If this is True and the GTIs overlap, an error is raised.

estimate_segment_size(min_counts=None, min_samples=None, even_sampling=None)[source]

Estimate a reasonable segment length for segment-by-segment analysis.

The user has to specify a criterion based on a minimum number of counts (if the time series has a counts attribute) or a minimum number of time samples. At least one of min_counts and min_samples must be specified. In the special case of a time series with dt=0 (event list-like, where each time stamp corresponds to a single count), the two definitions are equivalent.

Returns:
segment_size : float

The length of the light curve chunks that satisfies the conditions

Other Parameters:
min_counts : int

Minimum number of counts for each chunk. Optional (but needs min_samples if left unspecified). Only makes sense if the series has a counts attribute and it is evenly sampled.

min_samples : int

Minimum number of time bins. Optional (but needs min_counts if left unspecified).

even_sampling : bool

Force the treatment of the data as evenly sampled or not. If None, the data are considered evenly sampled if self.dt is larger than zero and the median separation between subsequent times is within 1% of self.dt.

Examples

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(150)
>>> counts = np.zeros_like(time) + 3
>>> ts = StingrayTimeseries(time, counts=counts, dt=1)
>>> assert np.isclose(ts.estimate_segment_size(min_counts=10, min_samples=3), 4.0)
>>> assert np.isclose(ts.estimate_segment_size(min_counts=10, min_samples=5), 5.0)
>>> counts[2:4] = 1
>>> ts = StingrayTimeseries(time, counts=counts, dt=1)
>>> assert np.isclose(ts.estimate_segment_size(min_counts=3, min_samples=1), 3.0)
>>> # A slightly more complex example
>>> dt=0.2
>>> time = np.arange(0, 1000, dt)
>>> counts = np.random.poisson(100, size=len(time))
>>> ts = StingrayTimeseries(time, counts=counts, dt=dt)
>>> assert np.isclose(ts.estimate_segment_size(100, 2), 0.4)
>>> min_total_bins = 40
>>> assert np.isclose(ts.estimate_segment_size(100, 40), 8.0)
property exposure

Return the total exposure of the time series, i.e. the sum of the GTIs.

Returns:
total_exposure : float

The total exposure of the time series, in seconds.
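Examples

A sketch with illustrative GTIs summing to 3 s:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> ts = StingrayTimeseries([0.5, 1.5, 3.5], dt=1, gti=[[0, 2], [3, 4]])
>>> assert np.isclose(ts.exposure, 3.0)  # (2 - 0) + (4 - 3)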

fill_bad_time_intervals(max_length=None, attrs_to_randomize=None, buffer_size=None, even_sampling=None, seed=None)[source]

Fill short bad time intervals with random data.

Warning

This method is only appropriate for very short bad time intervals. The simulated data are basically white noise, so they can alter the statistical properties of variable data. For very short gaps in the data, the effect of these small injections of white noise should be negligible. How short depends on the single case; the user is urged not to use the method as a black box, but to run simulations to measure its effect. If you have long bad time intervals, you should use more advanced techniques, not currently available in Stingray for this use case, such as Gaussian Processes. In particular, please verify that the values of max_length and buffer_size are adequate for your case.

To fill the gaps in all but the time points (i.e., flux measures, energies), we take the buffer_size (by default, the largest value between 100 and the estimated samples in a max_length-long gap) valid data points closest to the gap and repeat them randomly with the same empirical statistical distribution. So, if the my_fancy_attr attribute, in the 100 points of the buffer, has 30 times 10, 10 times 9, and 60 times 11, there will be on average 30% of 10, 60% of 11, and 10% of 9 in the simulated data.

Times are treated differently depending on whether the time series is evenly sampled or not. If it is not, the times are simulated from a uniform distribution with the same count rate found in the buffer. Otherwise, times just follow the same grid used inside GTIs. Whether the series is treated as evenly sampled is decided based on the even_sampling parameter. If left to None, the time series is considered evenly sampled if self.dt is greater than zero and the median separation between subsequent times is within 1% of the time resolution.

Other Parameters:
max_length : float

Maximum length of a bad time interval to be filled. If None, the criterion is bad time intervals shorter than 1/100th of the longest good time interval.

attrs_to_randomize : list of str, default None

List of array_attrs to randomize. If None, all array_attrs are randomized. It should not include time and _mask, which are treated separately.

buffer_size : int, default 100

Number of good data points to use to calculate the means and variances of the random data on each side of the bad time interval

even_sampling : bool, default None

Force the treatment of the data as evenly sampled or not. If None, the data are considered evenly sampled if self.dt is larger than zero and the median separation between subsequent times is within 1% of self.dt.

seed : int, default None

Random seed to use for the simulation. If None, a random seed is generated.

classmethod from_astropy_timeseries(ts: TimeSeries) StingrayTimeseries[source]

Create a StingrayTimeseries from data in an Astropy TimeSeries

The timeseries has to define at least a column called time; the rest of the columns will form the array attributes of the new time series, while the attributes in table.meta will form the new meta attributes of the time series.

Parameters:
ts : astropy.timeseries.TimeSeries

A TimeSeries object with the array attributes as columns, and the meta attributes in the meta dictionary

Returns:
ts : StingrayTimeseries

Timeseries object

join(*args, **kwargs)[source]

Join other StingrayTimeseries objects with the current one.

If both are empty, an empty StingrayTimeseries is returned.

Standard attributes such as pi and energy remain None if they are None in both. Otherwise, np.nan is used as a default value for the missing values. Arbitrary array attributes are created and joined using the same convention.

Multiple checks are done on the joined time series. If the time array of a series being joined is empty, it is ignored. If the time resolution is different, the final time series will have the coarser time resolution. If the MJDREF is different, the time reference will be changed to that of the first time series.

Note: join is not equivalent to concatenate. concatenate is used to join multiple non-overlapping time series along the time axis, while join is more general, and can be used to join multiple time series with different strategies (see parameter strategy below).

Parameters:
other : StingrayTimeseries or list of StingrayTimeseries

The other StingrayTimeseries object which is supposed to be joined with. If other is a list, it is assumed to be a list of StingrayTimeseries and they are all joined, one by one.

Returns:
ts_new : StingrayTimeseries object

The resulting StingrayTimeseries object.

Other Parameters:
strategy : {“intersection”, “union”, “append”, “infer”, “none”}

Method to use to merge the GTIs. If “intersection”, the GTIs are merged using the intersection of the GTIs. If “union”, the GTIs are merged using the union of the GTIs. If “none”, a single GTI with the minimum and the maximum time stamps of all GTIs is returned. If “infer”, the strategy is decided based on the GTIs. If there are no overlaps, “union” is used, otherwise “intersection” is used. If “append”, the GTIs are simply appended but they must be mutually exclusive.

plot(attr, witherrors=False, labels=None, ax=None, title=None, marker='-', save=False, filename=None, plot_btis=True, axis_limits=None)[source]

Plot the time series using matplotlib.

Plot the time series object on a graph, with self.time on the x-axis and the chosen attribute on the y-axis, optionally with error bars.

Parameters:
attr: str

Attribute to plot.

Other Parameters:
witherrors: boolean, default False

Whether to plot the StingrayTimeseries with errorbars or not

labels : iterable, default None

A list or tuple with xlabel and ylabel as strings. E.g. if the attribute is 'counts', the list of labels could be ['Time (s)', 'Counts (s^-1)']

ax : matplotlib.pyplot.axis object

Axis to be used for plotting. Defaults to creating a new one.

axis_limits : list, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

title : str, default None

The title of the plot.

marker : str, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

save : boolean, optional, default False

If True, save the figure with specified filename.

filename : str

File name of the image to save. Depends on the boolean save.

plot_btis : bool

Plot the bad time intervals as red areas on the plot

rebin(dt_new=None, f=None, method='sum')[source]

Rebin the time series to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.

Parameters:
dt_new: float

The new time resolution of the time series. Must be larger than the time resolution of the old time series!

method: {``sum`` | ``mean`` | ``average``}, optional, default ``sum``

This keyword argument sets whether the counts in the new bins should be summed or averaged.

Returns:
ts_new: StingrayTimeseries object

The StingrayTimeseries object with the new, binned time series.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes dt_new with f*self.dt
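Examples

A sketch, rebinning a constant series to double the bin size:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(0.5, 10, 1)
>>> ts = StingrayTimeseries(time, counts=np.ones_like(time), dt=1)
>>> ts_reb = ts.rebin(2)  # counts are summed in the new bins
>>> assert np.isclose(ts_reb.dt, 2)
>>> assert np.allclose(ts_reb.counts, 2)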

shift(time_shift: float, inplace=False) StingrayTimeseries[source]

Shift the time and the GTIs by the same amount

Parameters:
time_shift: float

The time interval by which the time series will be shifted (in the same units as the time array in StingrayTimeseries)

Returns:
ts : StingrayTimeseries object

The new time series shifted by time_shift

Other Parameters:
inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.
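Examples

A sketch with illustrative times and GTIs:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> ts = StingrayTimeseries([0, 1, 2], dt=1, gti=[[-0.5, 2.5]])
>>> ts_new = ts.shift(10)  # times and GTIs move together
>>> assert np.allclose(ts_new.time, [10, 11, 12])
>>> assert np.allclose(ts_new.gti, [[9.5, 12.5]])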

sort(reverse=False, inplace=False)[source]

Sort a StingrayTimeseries object by time.

A StingrayTimeseries can be sorted in either increasing or decreasing order using this method. The time array gets sorted and the other array attributes are reordered accordingly.

Parameters:
reverse : boolean, default False

If True then the object is sorted in reverse order.

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

Returns:
ts_new: StingrayTimeseries object

The StingrayTimeseries object with sorted time and counts arrays.

Examples

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = [2, 1, 3]
>>> count = [200, 100, 300]
>>> ts = StingrayTimeseries(time, array_attrs={"counts": count}, dt=1, skip_checks=True)
>>> ts_new = ts.sort()
>>> ts_new.time
array([1, 2, 3])
>>> assert np.allclose(ts_new.counts, [100, 200, 300])
split_by_gti(gti=None, min_points=2)[source]

Split the current StingrayTimeseries object into a list of StingrayTimeseries objects, one for each continuous GTI segment as defined in the gti attribute.

Parameters:
min_points : int, default 2

The minimum number of data points in each time series. Segments with fewer data points will be ignored.

Returns:
list_of_tss : list

A list of StingrayTimeseries objects, one for each GTI segment
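Examples

A sketch, splitting over two illustrative GTIs:

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(10)
>>> ts = StingrayTimeseries(time, counts=np.ones(10), dt=1, gti=[[-0.5, 3.5], [5.5, 9.5]])
>>> parts = ts.split_by_gti()
>>> len(parts)
2
>>> assert np.allclose(parts[0].time, [0, 1, 2, 3])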

to_astropy_timeseries() TimeSeries[source]

Save the StingrayTimeseries to an Astropy timeseries.

Array attributes (time, pi, energy, etc.) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Returns:
ts : astropy.timeseries.TimeSeries

A TimeSeries object with the array attributes as columns, and the meta attributes in the meta dictionary

truncate(start=0, stop=None, method='index')[source]

Truncate a StingrayTimeseries object.

This method takes a start and a stop point (either as indices, or as times in the same unit as those in the time attribute), truncates all bins before start and after stop, and returns a new StingrayTimeseries object with the truncated time series.

Parameters:
start : int, default 0

Index (or time stamp) of the starting point of the truncation. If no value is set for the start point, then all points from the first element in the time array are taken into account.

stop : int, default None

Index (or time stamp) of the ending point (exclusive) of the truncation. If no value of stop is set, then all points up to and including the last point in the time array are taken into account.

method : {“index” | “time”}, optional, default “index”

Type of the start and stop values. If set to “index” then the values are treated as indices of the arrays, or if set to “time”, the values are treated as actual time values.

Returns:
ts_new: StingrayTimeseries object

The StingrayTimeseries object with truncated time and arrays.

Examples

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [10, 20, 30, 40, 50, 60, 70, 80, 90]
>>> ts = StingrayTimeseries(time, array_attrs={"counts": count}, dt=1)
>>> ts_new = ts.truncate(start=2, stop=8)
>>> assert np.allclose(ts_new.counts, [30, 40, 50, 60, 70, 80])
>>> assert np.allclose(ts_new.time, [3, 4, 5, 6, 7, 8])
>>> # Truncation can also be done by time values
>>> ts_new = ts.truncate(start=6, method='time')
>>> assert np.allclose(ts_new.time, [6, 7, 8, 9])
>>> assert np.allclose(ts_new.counts, [60, 70, 80, 90])

Lightcurve

class stingray.Lightcurve(time=None, counts=None, err=None, input_counts=True, gti=None, err_dist='poisson', bg_counts=None, bg_ratio=None, frac_exp=None, mjdref=0, dt=None, skip_checks=False, low_memory=False, mission=None, instr=None, header=None, **other_kw)[source]

Make a light curve object from an array of time stamps and an array of counts.

Parameters:
time: iterable, astropy.time.Time, or astropy.units.Quantity object

A list or array of time stamps for a light curve. Must be a type that can be cast to np.array or a List of floats, or that has a value attribute that does (e.g. an astropy.units.Quantity or astropy.time.Time object).

counts: iterable, optional, default ``None``

A list or array of the counts in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second, otherwise use counts/bin).

err: iterable, optional, default ``None``

A list or array of the uncertainties in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second, otherwise use counts/bin). If None, we assume the data are Poisson-distributed and calculate the error from the average of the lower and upper 1-sigma confidence intervals for the Poisson distribution with mean equal to counts.

input_counts: bool, optional, default True

If True, the code assumes that the input data in counts is in units of counts/bin. If False, it assumes the data in counts is in counts/second.

gti: 2-d float array, default ``None``

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They are not applied to the data by default. They will be used by other methods to have an indication of the “safe” time intervals to use during analysis.

err_dist: str, optional, default ``poisson``

Statistical distribution used to calculate the uncertainties and other statistical values appropriately. If None, no assumptions are made and errors are kept equal to zero.

bg_counts: iterable, numpy.array or List of floats, optional, default ``None``

A list or array of background counts detected in the background extraction region in each bin corresponding to the bins defined in time.

bg_ratio: iterable, numpy.array or List of floats, optional, default ``None``

A list or array of source region area to background region area ratio in each bin. These are factors by which the bg_counts should be scaled to estimate background counts within the source aperture.

frac_exp: iterable, numpy.array or List of floats, optional, default ``None``

A list or array of fractional exposures in each bin.

mjdref: float

MJD reference (useful in most high-energy mission data)

dt: float or array of floats. Default median(diff(time))

Time resolution of the light curve. Can be an array of the same dimension as time specifying width of each bin.

skip_checks: bool

If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk

low_memory: bool

If True, all the lazily evaluated attributes (e.g., countrate and countrate_err if input_counts is True) will _not_ be stored in memory, but calculated every time they are requested.

mission : str

Mission that recorded the data (e.g. NICER)

instr : str

Instrument onboard the mission

header : str

The full header of the original FITS file, if relevant

The full header of the original FITS file, if relevant

**other_kw

Used internally. Any other keyword arguments will be ignored

Attributes:
time: numpy.ndarray

The array of midpoints of time bins.

bin_lo: numpy.ndarray

The array of lower time stamps of the time bins.

bin_hi: numpy.ndarray

The array of upper time stamps of the time bins.

counts: numpy.ndarray

The counts per bin corresponding to the bins in time.

counts_err: numpy.ndarray

The uncertainties corresponding to counts

bg_counts: numpy.ndarray

The background counts corresponding to the bins in time.

bg_ratio: numpy.ndarray

The ratio of source region area to background region area corresponding to each bin.

frac_exp: numpy.ndarray

The fractional exposures in each bin.

countrate: numpy.ndarray

The counts per second in each of the bins defined in time.

countrate_err: numpy.ndarray

The uncertainties corresponding to countrate

meanrate: float

The mean count rate of the light curve.

meancounts: float

The mean counts of the light curve.

n: int

The number of data points in the light curve.

dt: float or array of floats

The time resolution of the light curve.

mjdref: float

MJD reference date (mjdref + tstart / 86400 gives the MJD at the start of the observation)

tseg: float

The total duration of the light curve.

tstart: float

The start time of the light curve.

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They indicate the “safe” time intervals to be used during the analysis of the light curve.

err_dist: string

Statistic of the Lightcurve; it is used to calculate the uncertainties and other statistical values appropriately. It propagates to Spectrum classes.

mission : str

Mission that recorded the data (e.g. NICER)

instr : str

Instrument onboard the mission

detector_id : iterable

The detector that recorded each photon, if relevant (e.g. XMM, Chandra)

header : str

The full header of the original FITS file, if relevant

The full header of the original FITS file, if relevant
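Examples

A sketch of a typical instantiation, with simulated Poisson counts:

>>> import numpy as np
>>> from stingray import Lightcurve
>>> time = np.arange(10)
>>> counts = np.random.poisson(100, time.size)
>>> lc = Lightcurve(time, counts, dt=1, mjdref=58000)
>>> assert np.isclose(lc.meancounts, np.mean(counts))
>>> assert np.allclose(lc.countrate, counts / lc.dt)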

analyze_lc_chunks(segment_size, func, fraction_step=1, **kwargs)[source]

Analyze segments of the light curve with any function.

Deprecated since version 2.0: Use Lightcurve.analyze_segments(func, segment_size) instead.

Parameters:
segment_size : float

Length in seconds of the light curve segments

func : function

Function accepting a Lightcurve object as single argument, plus possible additional keyword arguments, and returning a number or a tuple - e.g., (result, error) where both result and error are numbers.

Returns:
start_times : array

Lower time boundaries of all time segments.

stop_times : array

Upper time boundaries of all segments.

result : array of N elements

The result of func for each segment of the light curve

Other Parameters:
fraction_step : float

By default, segments do not overlap (fraction_step = 1). If fraction_step < 1, then the start points of consecutive segments are fraction_step * segment_size apart, and consecutive segments overlap. For example, for fraction_step = 0.5, the window shifts by one half of segment_size.

kwargs : keyword arguments

Additional keyword arguments that, if present, will be passed to func

apply_gtis(inplace=True)[source]

Apply GTIs to a light curve. Filters the time, counts, countrate, counts_err and countrate_err arrays for all bins that fall into Good Time Intervals and recalculates mean countrate and the number of bins.

Parameters:
inplace : bool

If True, overwrite the current light curve. Otherwise, return a new one.

baseline(lam, p, niter=10, offset_correction=False)[source]

Calculate the baseline of the light curve, accounting for GTIs.

Parameters:
lam : float

“smoothness” parameter. Larger values make the baseline stiffer. Typically 1e2 < lam < 1e9

p : float

“asymmetry” parameter. Smaller values make the baseline more “horizontal”. Typically 0.001 < p < 0.1, but not necessarily.

Returns:
baseline : numpy.ndarray

An array with the baseline of the light curve

Other Parameters:
offset_correction : bool, default False

By default, this method does not align to the running mean of the light curve, but goes below the light curve. Setting offset_correction to True, an additional step is done to shift the baseline so that it is moved to the middle of the light curve noise distribution.

bexvar()[source]

Finds posterior samples of Bayesian excess variance (bexvar) for the light curve. It requires source counts in counts and time intervals for each bin. If dt is an array, its elements are used as the time intervals for each bin. If dt is a float, the time intervals are assumed to be all equal to dt.

Returns:
lc_bexvar : iterable, numpy.array of floats

An array of posterior samples of Bayesian excess variance (bexvar).

check_lightcurve()[source]

Make various checks on the lightcurve.

It can be slow; use it if you are not sure about your input data.

estimate_chunk_length(*args, **kwargs)[source]

Deprecated alias of estimate_segment_size.

estimate_segment_size(min_counts=100, min_samples=100, even_sampling=None)[source]

Estimate a reasonable segment length for segment-by-segment analysis.

The user has to specify a criterion based on a minimum number of counts (if the time series has a counts attribute) or a minimum number of time samples. At least one between min_counts and min_samples must be specified.

Returns:
segment_size : float

The length of the light curve chunks that satisfies the conditions

Other Parameters:
min_counts : int

Minimum number of counts for each chunk. Optional (but needs min_samples if left unspecified). Only makes sense if the series has a counts attribute and it is evenly sampled.

min_samples : int

Minimum number of time bins. Optional (but needs min_counts if left unspecified).

even_sampling : bool

Force the treatment of the data as evenly sampled or not. If None, the data are considered evenly sampled if self.dt is larger than zero and the median separation between subsequent times is within 1% of self.dt.

Examples

>>> import numpy as np
>>> from stingray import Lightcurve
>>> time = np.arange(150)
>>> count = np.zeros_like(time) + 3
>>> lc = Lightcurve(time, count, dt=1)
>>> assert np.isclose(
...     lc.estimate_segment_size(min_counts=10, min_samples=3), 4)
>>> assert np.isclose(lc.estimate_segment_size(min_counts=10, min_samples=5), 5)
>>> count[2:4] = 1
>>> lc = Lightcurve(time, count, dt=1)
>>> assert np.isclose(lc.estimate_segment_size(min_counts=3, min_samples=1), 3)
>>> # A slightly more complex example
>>> dt=0.2
>>> time = np.arange(0, 1000, dt)
>>> counts = np.random.poisson(100, size=len(time))
>>> lc = Lightcurve(time, counts, dt=dt)
>>> assert np.isclose(lc.estimate_segment_size(100, 2), 0.4)
>>> min_total_bins = 40
>>> assert np.isclose(lc.estimate_segment_size(100, 40), 8.0)
static from_astropy_table(ts, **kwargs)[source]

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in the table meta dictionary will form the new meta attributes of the object.

static from_astropy_timeseries(ts, **kwargs)[source]

Create a StingrayTimeseries from data in an Astropy TimeSeries

The timeseries has to define at least a column called time; the rest of the columns will form the array attributes of the new time series, while the attributes in table.meta will form the new meta attributes of the time series.

Parameters:
ts : astropy.timeseries.TimeSeries

A TimeSeries object with the array attributes as columns, and the meta attributes in the meta dictionary

Returns:
ts : StingrayTimeseries

Timeseries object

static from_lightkurve(lk, skip_checks=True)[source]

Creates a new Lightcurve from a lightkurve.LightCurve.

Parameters:
lk : lightkurve.LightCurve

A lightkurve LightCurve object

skip_checks: bool

If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk.

join(other, skip_checks=False)[source]

Join two lightcurves into a single object.

The new Lightcurve object will contain time stamps from both the objects. The counts and countrate attributes in the resulting object will contain the union of the non-overlapping parts of the two individual objects, or the average in case of overlapping time arrays of both Lightcurve objects.

Good Time Intervals are also joined.

Note: Ideally, the time arrays of the two lightcurves should not overlap.

Parameters:
other : Lightcurve object

The other Lightcurve object which is supposed to be joined with.

skip_checks: bool

If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk.

Returns:
lc_new : Lightcurve object

The resulting Lightcurve object.

Examples

>>> import numpy as np
>>> from stingray import Lightcurve
>>> time1 = [5, 10, 15]
>>> count1 = [300, 100, 400]
>>> time2 = [20, 25, 30]
>>> count2 = [600, 1200, 800]
>>> lc1 = Lightcurve(time1, count1, dt=5)
>>> lc2 = Lightcurve(time2, count2, dt=5)
>>> lc = lc1.join(lc2)
>>> lc.time
array([ 5, 10, 15, 20, 25, 30])
>>> assert np.allclose(lc.counts, [ 300,  100,  400,  600, 1200,  800])
static make_lightcurve(toa, dt, tseg=None, tstart=None, gti=None, mjdref=0, use_hist=False)[source]

Make a light curve out of photon arrival times, with a given time resolution dt. Note that dt should be larger than the native time resolution of the instrument that has taken the data.

Parameters:
toa: iterable

list of photon arrival times

dt: float

time resolution of the light curve (the bin width)

tseg: float, optional, default ``None``

The total duration of the light curve. If this is None, then the total duration of the light curve will be the interval between either the first and the last GTI boundary or, if gti is not set, the arrival times of the first and the last photon in toa.

Note: If tseg is not divisible by dt (i.e. if tseg/dt is not an integer number), then the last fractional bin will be dropped!

tstart: float, optional, default ``None``

The start time of the light curve. If this is None, either the first gti boundary or, if not available, the arrival time of the first photon will be used as the start time of the light curve.

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals

use_hist : bool

Use np.histogram instead of np.bincount. Might be advantageous for very short datasets.

Returns:
lc: Lightcurve object

A Lightcurve object with the binned light curve
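Examples

A sketch, binning a handful of illustrative arrival times into 1-s bins:

>>> import numpy as np
>>> from stingray import Lightcurve
>>> toa = np.array([0.1, 0.3, 0.5, 1.2, 1.3, 2.7])
>>> lc = Lightcurve.make_lightcurve(toa, dt=1, tstart=0, tseg=3)
>>> assert np.allclose(lc.time, [0.5, 1.5, 2.5])  # bin centers
>>> assert np.allclose(lc.counts, [3, 2, 1])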

plot(witherrors=False, labels=None, ax=None, title=None, marker='-', save=False, filename=None, axis_limits=None, axis=None, plot_btis=True)[source]

Plot the light curve using matplotlib.

Plot the light curve object on a graph self.time on x-axis and self.counts on y-axis with self.counts_err optionally as error bars.

Parameters:
witherrors: boolean, default False

Whether to plot the Lightcurve with errorbars or not

labels : iterable, default None

A list or tuple with xlabel and ylabel as strings.

axis_limits : list, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

axis : list, tuple, string, default None

Deprecated in favor of axis_limits, same functionality.

title : str, default None

The title of the plot.

marker : str, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

save : boolean, optional, default False

If True, save the figure with specified filename.

filename : str

File name of the image to save. Depends on the boolean save.

ax : matplotlib.pyplot.axis object

Axis to be used for plotting. Defaults to creating a new one.

plot_btis : bool

Plot the bad time intervals as red areas on the plot

classmethod read(filename, fmt=None, format_=None, err_dist='gauss', skip_checks=False, **fits_kwargs)[source]

Read a Lightcurve object from file.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • hea : FITS Light curves from HEASARC-supported missions.

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a time column and a counts or countrate column. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all lightcurve attributes such as dt, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
lc : Lightcurve object
Other Parameters:
err_dist: str, default=’gauss’

Default error distribution if not specified in the file (e.g. for ASCII files). The default is ‘gauss’ just because it is likely that people using ASCII light curves will want to specify Gaussian error bars, if any.

skip_checks : bool

See Lightcurve documentation

**fits_kwargs : additional keyword arguments

Any other arguments to be passed to lcurve_from_fits (only relevant for hea/ogip formats)

rebin(dt_new=None, f=None, method='sum')[source]

Rebin the light curve to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.

Parameters:
dt_new: float

The new time resolution of the light curve. Must be larger than the time resolution of the old light curve!

method: {``sum`` | ``mean`` | ``average``}, optional, default ``sum``

This keyword argument sets whether the counts in the new bins should be summed or averaged.

Returns:
lc_new: Lightcurve object

The Lightcurve object with the new, binned light curve.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes dt_new with f*self.dt

sort(reverse=False, inplace=False)[source]

Sort a Lightcurve object by time.

A Lightcurve can be sorted in either increasing or decreasing order using this method. The time array gets sorted and the counts array is changed accordingly.

Parameters:
reverse : boolean, default False

If True then the object is sorted in reverse order.

inplace : bool

If True, overwrite the current light curve. Otherwise, return a new one.

Returns:
lc_new: Lightcurve object

The Lightcurve object with sorted time and counts arrays.

Examples

>>> import numpy as np
>>> from stingray import Lightcurve
>>> time = [2, 1, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1, skip_checks=True)
>>> lc_new = lc.sort()
>>> lc_new.time
array([1, 2, 3])
>>> assert np.allclose(lc_new.counts, [100, 200, 300])
sort_counts(reverse=False, inplace=False)[source]

Sort a Lightcurve object in accordance with its counts array.

A Lightcurve can be sorted in either increasing or decreasing order using this method. The counts array gets sorted and the time array is changed accordingly.

Parameters:
reverse : boolean, default False

If True then the object is sorted in reverse order.

inplace : bool

If True, overwrite the current light curve. Otherwise, return a new one.

Returns:
lc_new: Lightcurve object

The Lightcurve object with sorted time and counts arrays.

Examples

>>> import numpy as np
>>> from stingray import Lightcurve
>>> time = [1, 2, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1, skip_checks=True)
>>> lc_new = lc.sort_counts()
>>> lc_new.time
array([2, 1, 3])
>>> assert np.allclose(lc_new.counts, [100, 200, 300])
split(min_gap, min_points=1)[source]

For data with gaps, it can sometimes be useful to split the light curve into separate, evenly sampled objects along those data gaps. This method does exactly that: it finds data gaps of a specified minimum size, and produces a list of new Lightcurve objects for each contiguous segment.

Parameters:
min_gap : float

The length of a data gap, in the same units as the time attribute of the Lightcurve object. Any smaller gaps will be ignored, any larger gaps will be identified and used to split the light curve.

min_points : int, default 1

The minimum number of data points in each light curve. Light curves with fewer data points will be ignored.

Returns:
lc_split : iterable of Lightcurve objects

The list of all contiguous light curves

Examples

>>> import numpy as np
>>> from stingray import Lightcurve
>>> time = np.array([1, 2, 3, 6, 7, 8, 11, 12, 13])
>>> counts = np.random.rand(time.shape[0])
>>> lc = Lightcurve(time, counts, dt=1, skip_checks=True)
>>> split_lc = lc.split(1.5)
to_astropy_table(**kwargs)[source]

Save the light curve to an astropy.table.Table object.

The time array and all the array attributes become columns. The meta attributes become metadata of the astropy.table.Table object.

Other Parameters:
no_longdouble : bool, default False

If True, the data are converted to double precision before being saved. This is useful, e.g., for saving to FITS files, which do not support long double precision.

to_astropy_timeseries(**kwargs)[source]

Save the light curve to an astropy.timeseries.TimeSeries object.

The time array and all the array attributes become columns. The meta attributes become metadata of the astropy.timeseries.TimeSeries object. The time array is saved as a TimeDelta object.

Other Parameters:
no_longdouble : bool, default False

If True, the data are converted to double precision before being saved. This is useful, e.g., for saving to FITS files, which do not support long double precision.

to_lightkurve()[source]

Returns a lightkurve.LightCurve object. This feature requires Lightkurve to be installed (e.g. pip install lightkurve). An ImportError will be raised if this package is not available.

Returns:
lightcurve : lightkurve.LightCurve

A lightkurve LightCurve object.

truncate(start=0, stop=None, method='index')[source]

Truncate a Lightcurve object.

This method takes a start and a stop point (either as indices, or as times in the same unit as those in the time attribute), truncates all bins before start and after stop, and returns a new Lightcurve object with the truncated light curve.

Parameters:
start : int, default 0

Index (or time stamp) of the starting point of the truncation. If no value is set for the start point, then all points from the first element in the time array are taken into account.

stop : int, default None

Index (or time stamp) of the ending point (exclusive) of the truncation. If no value of stop is set, then all points up to and including the last point in the counts array are taken into account.

method : {“index” | “time”}, optional, default “index”

Type of the start and stop values. If set to “index” then the values are treated as indices of the counts array, or if set to “time”, the values are treated as actual time values.

Returns:
lc_new: Lightcurve object

The Lightcurve object with truncated time and counts arrays.

Examples

>>> import numpy as np
>>> from stingray import Lightcurve
>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [10, 20, 30, 40, 50, 60, 70, 80, 90]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.truncate(start=2, stop=8)
>>> assert np.allclose(lc_new.counts, [30, 40, 50, 60, 70, 80])
>>> lc_new.time
array([3, 4, 5, 6, 7, 8])
>>> # Truncation can also be done by time values
>>> lc_new = lc.truncate(start=6, method='time')
>>> lc_new.time
array([6, 7, 8, 9])
>>> assert np.allclose(lc_new.counts, [60, 70, 80, 90])

EventList

class stingray.events.EventList(time=None, energy=None, ncounts=None, mjdref=0, dt=0, notes='', gti=None, pi=None, high_precision=False, mission=None, instr=None, header=None, detector_id=None, ephem=None, timeref=None, timesys=None, rmf_file=None, skip_checks=False, **other_kw)[source]

Basic class for event list data. Event lists generally correspond to individual events (e.g. photons) recorded by the detector, and their associated properties. For X-ray data where this type commonly occurs, events are time stamps of when a photon arrived in the detector, and (optionally) the photon energy associated with the event.

Parameters:
time: iterable

A list or array of time stamps

Other Parameters:
dt: float

The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.

energy: iterable

A list or array of photon energy values in keV

mjdreffloat

The MJD used as a reference for the time array.

ncounts: int

Number of desired data points in the event list. Deprecated.

gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

piinteger, numpy.ndarray

PI channels

notesstr

Any useful annotations

high_precisionbool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.

missionstr

Mission that recorded the data (e.g. NICER)

instrstr

Instrument onboard the mission

headerstr

The full header of the original FITS file, if relevant

detector_iditerable

The detector that recorded each photon (if the instrument has more than one, e.g. XMM/EPIC-pn)

timerefstr

The time reference, as recorded in the FITS file (e.g. SOLARSYSTEM)

timesysstr

The time system, as recorded in the FITS file (e.g. TDB)

ephemstr

The JPL ephemeris used to barycenter the data, if any (e.g. DE430)

rmf_filestr, default None

The file name of the RMF file to use for calibration.

skip_checksbool, default False

Skip checks for the validity of the event list. Use with caution.

**other_kw

Used internally. Any other keyword arguments will be ignored

Attributes:
time: numpy.ndarray

The array of event arrival times, in seconds from the reference MJD defined in mjdref

energy: numpy.ndarray

The array of photon energy values

ncounts: int

The number of data points in the event list

dt: float

The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.

mjdreffloat

The MJD used as a reference for the time array.

gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

piinteger, numpy.ndarray

PI channels

high_precisionbool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.

missionstr

Mission that recorded the data (e.g. NICER)

instrstr

Instrument onboard the mission

detector_iditerable

The detector that recorded each photon, if relevant (e.g. XMM, Chandra)

headerstr

The full header of the original FITS file, if relevant
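
For orientation, here is a minimal sketch of creating an event list by hand (the times, energies, and GTI are invented for illustration):

>>> times = np.array([0.5, 1.1, 2.2, 3.7])
>>> energies = np.array([1.0, 2.0, 5.0, 3.0])
>>> ev = EventList(time=times, energy=energies, gti=[[0, 4]])
>>> assert ev.ncounts == 4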

apply_deadtime(deadtime, inplace=False, **kwargs)[source]

Apply deadtime filter to this event list.

Additional arguments in kwargs are passed to get_deadtime_mask

Parameters:
deadtimefloat

Value of dead time to apply to data

inplacebool, default False

If True, apply the deadtime to the current event list. Otherwise, return a new event list.

Returns:
new_event_listEventList object

Filtered event list. If inplace is True, this is the input object, filtered for dead time; otherwise it is a new object.

additional_outputobject

Only returned if return_all is True. See get_deadtime_mask for more details.

Examples

>>> events = np.array([1, 1.05, 1.07, 1.08, 1.1, 2, 2.2, 3, 3.1, 3.2])
>>> events = EventList(events, gti=[[0, 3.3]])
>>> events.pi=np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.energy=np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.mjdref = 10
>>> filt_events, retval = events.apply_deadtime(0.11, inplace=False,
...                                             verbose=False,
...                                             return_all=True)
>>> assert filt_events is not events
>>> expected = np.array([1, 2, 2.2, 3, 3.2])
>>> assert np.allclose(filt_events.time, expected)
>>> assert np.allclose(filt_events.pi, 1)
>>> assert np.allclose(filt_events.energy, 1)
>>> assert not np.allclose(events.pi, 1)
>>> filt_events = events.apply_deadtime(0.11, inplace=True,
...                                     verbose=False)
>>> assert filt_events is events
convert_pi_to_energy(rmf_file)[source]

Calibrate the energy column of the event list.

Defines the energy attribute of the event list by converting the PI channels to energy using the provided RMF file.

Parameters:
rmf_filestr

The file name of the RMF file to use for calibration.
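
A usage sketch follows; the RMF file name is a placeholder, not a real calibration file:

>>> ev.convert_pi_to_energy("detector.rmf")  # hypothetical RMF file
>>> # ev.energy now contains the energies corresponding to ev.pi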

filter_energy_range(energy_range, inplace=False, use_pi=False)[source]

Filter the event list from a given energy range.

Parameters:
energy_range: [float, float]

Energy range in keV, or in PI channel (if use_pi is True)

Other Parameters:
inplacebool, default False

Do the change in place (modify current event list). Otherwise, copy to a new event list.

use_pibool, default False

Use PI channel instead of energy in keV

Examples

>>> events = EventList(time=[0, 1, 2], energy=[0.3, 0.5, 2], pi=[3, 5, 20])
>>> e1 = events.filter_energy_range([0, 1])
>>> assert np.allclose(e1.time, [0, 1])
>>> assert np.allclose(events.time, [0, 1, 2])
>>> e2 = events.filter_energy_range([0, 10], use_pi=True, inplace=True)
>>> assert np.allclose(e2.time, [0, 1])
>>> assert np.allclose(events.time, [0, 1])
static from_lc(lc)[source]

Create an EventList from a stingray.Lightcurve object. Note that all events in a given time bin will have the same time stamp.

Bins with negative counts will be ignored.

Parameters:
lc: :class:`stingray.Lightcurve` object

Light curve to use for creation of the event list.

Returns:
ev: EventList object

The resulting list of photon arrival times generated from the light curve.
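
A small illustration of the behavior described above (all events in a bin share the bin's time stamp; the bin with zero counts produces no events):

>>> lc = Lightcurve([1, 2, 3], [2, 0, 1], dt=1)
>>> ev = EventList.from_lc(lc)
>>> assert np.allclose(ev.time, [1, 1, 3])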

get_color_evolution(energy_ranges, segment_size=None, use_pi=False)[source]

Compute the color in equal-length segments of the event list.

Parameters:
energy_ranges2x2 list

List of energy ranges to compute the color: [[en1_min, en1_max], [en2_min, en2_max]]

segment_sizefloat

Segment size in seconds. If None, the full GTIs are used as segments instead.

Returns:
colorarray-like

Array of colors, computed in each segment as the ratio of the counts in the second energy range to the counts in the first energy range.

Other Parameters:
use_pibool, default False

Use PI channel instead of energy in keV
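
A usage sketch; here ev is an EventList with calibrated energies, and the energy ranges and segment size are arbitrary choices:

>>> colors = ev.get_color_evolution([[0.3, 2], [2, 10]], segment_size=100)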

get_energy_mask(energy_range, use_pi=False)[source]

Get a mask corresponding to events with a given energy range.

Parameters:
energy_range: [float, float]

Energy range in keV, or in PI channel (if use_pi is True)

Other Parameters:
use_pibool, default False

Use PI channel instead of energy in keV
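
For example (a minimal sketch):

>>> ev = EventList(time=[0, 1, 2], energy=[0.3, 0.5, 2.0])
>>> mask = ev.get_energy_mask([0, 1])
>>> assert np.allclose(mask, [True, True, False])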

get_intensity_evolution(energy_range, segment_size=None, use_pi=False)[source]

Compute the intensity in equal-length segments (or full GTIs) of the event list.

Parameters:
energy_range[en1_min, en1_max]

Energy range to compute the intensity

segment_sizefloat

Segment size in seconds. If None, the full GTIs are used as segments instead.

Returns:
intensityarray-like

Array of intensities (in counts/s), computed in each segment.

Other Parameters:
use_pibool, default False

Use PI channel instead of energy in keV

join(other, strategy='infer')[source]

Join two EventList objects into one.

If both are empty, an empty EventList is returned.

GTIs are crossed if the event lists are over a common time interval, and appended otherwise.

Standard attributes such as pi and energy remain None if they are None in both. Otherwise, np.nan is used as a default value for the EventList where they were None. Arbitrary attributes (e.g., Stokes parameters in polarimetric data) are created and joined using the same convention.

Multiple checks are done on the event lists being joined. An empty event list (or one whose time array is empty) is ignored. If the time resolutions differ, the final event list will have the coarser one. If the MJDREF differs, the time reference will be changed to that of the first event list.

Parameters:
otherEventList object or list of EventList objects

The other EventList object which is supposed to be joined with. If other is a list, it is assumed to be a list of EventList objects and they are all joined, one by one.

Returns:
`ev_new`EventList object

The resulting EventList object.

Other Parameters:
strategy{“intersection”, “union”, “append”, “infer”, “none”}

Method to use to merge the GTIs. If “intersection”, the GTIs are merged using the intersection of the GTIs. If “union”, the GTIs are merged using the union of the GTIs. If “none”, a single GTI with the minimum and the maximum time stamps of all GTIs is returned. If “infer”, the strategy is decided based on the GTIs. If there are no overlaps, “union” is used, otherwise “intersection” is used. If “append”, the GTIs are simply appended but they must be mutually exclusive.
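
For example, joining two event lists that cover consecutive time intervals (a sketch; since the GTIs do not overlap, they are appended):

>>> ev1 = EventList(time=[1, 2, 3], gti=[[0.5, 3.5]])
>>> ev2 = EventList(time=[4, 5], gti=[[3.5, 5.5]])
>>> ev_new = ev1.join(ev2)
>>> assert np.allclose(ev_new.time, [1, 2, 3, 4, 5])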

property ncounts

Number of events in the event list.

classmethod read(filename, fmt=None, rmf_file=None, **kwargs)[source]

Read a EventList object from file.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • hea or ogip : FITS Event files from (well, some) HEASARC-supported missions.

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a time column. Other recognized columns are energy and pi. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all eventlist attributes such as mission, gti, etc. with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
ev: EventList object

The EventList object reconstructed from file

Other Parameters:
rmf_filestr, default None

The file name of the RMF file to use for energy calibration. Defaults to None, which implies no channel->energy conversion at this stage (or a default calibration applied to selected missions).

kwargsdict

Any further keyword arguments to be passed to load_events_and_gtis for reading in event lists in OGIP/HEASOFT format
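
A usage sketch; the file names below are placeholders:

>>> ev = EventList.read("events.fits", fmt="hea")         # HEASARC-style FITS file
>>> ev = EventList.read("events.ecsv", fmt="ascii.ecsv")  # Table-compatible format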

simulate_energies(spectrum, use_spline=False)[source]

Assign (simulate) energies to event list from a spectrum.

Parameters:
spectrum: 2-d array or list [energies, spectrum]

Energies versus corresponding fluxes. The 2-d array or list must have energies across the first dimension and fluxes across the second one. If the dimension of the energies is the same as spectrum, they are interpreted as bin centers. If it is longer by one, they are interpreted as proper bin edges (similarly to the bins of np.histogram). Note that for non-uniformly binned spectra, it is advisable to pass the exact edges.
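
For instance, to draw energies from a flat spectrum between 1 and 10 keV (a sketch; ev is an EventList whose arrival times are already set, and the edges and fluxes are arbitrary):

>>> edges = np.linspace(1, 10, 10)  # 10 edges define 9 bins
>>> fluxes = np.ones(9)             # flat spectrum
>>> ev.simulate_energies([edges, fluxes])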

simulate_times(lc, use_spline=False, bin_time=None)[source]

Simulate times from an input light curve.

Randomly simulate photon arrival times to an EventList from a stingray.Lightcurve object, using the inverse CDF method.

Note: Preferably use model light curves containing no Poisson noise, as this method will intrinsically add Poisson noise to them.

Parameters:
lc: :class:`stingray.Lightcurve` object

The light curve from which the photon arrival times are simulated.

Returns:
timesarray-like

Simulated photon arrival times

Other Parameters:
use_splinebool

Approximate the light curve with a spline to avoid binning effects

bin_timefloat default None

Ignored and deprecated, maintained for backwards compatibility.
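
A usage sketch; lc is assumed to be a smooth model Lightcurve, as recommended in the note above:

>>> ev = EventList()
>>> times = ev.simulate_times(lc)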

sort(inplace=False)[source]

Sort the event list in time.

Returns:
eventlistEventList

The sorted event list. If inplace=True, it will be a shallow copy of self.

Other Parameters:
inplacebool, default False

Sort in place. If False, return a new event list.

Examples

>>> events = EventList(time=[0, 2, 1], energy=[0.3, 2, 0.5], pi=[3, 20, 5],
...                    skip_checks=True)
>>> e1 = events.sort()
>>> assert np.allclose(e1.time, [0, 1, 2])
>>> assert np.allclose(e1.energy, [0.3, 0.5, 2])
>>> assert np.allclose(e1.pi, [3, 5, 20])

But the original event list has not been altered (inplace=False by default):

>>> assert np.allclose(events.time, [0, 2, 1])

Let's do it in place instead:

>>> e2 = events.sort(inplace=True)
>>> assert np.allclose(e2.time, [0, 1, 2])

In this case, the original event list has been altered:

>>> assert np.allclose(events.time, [0, 1, 2])

to_binned_timeseries(dt, array_attrs=None)[source]

Convert the event list to a binned stingray.StingrayTimeseries object.

The result will be something similar to a light curve, but with arbitrary attributes corresponding to a weighted sum of each specified attribute of the event list.

E.g. if the event list has a q attribute, the final time series will have a q attribute, which is the sum of all q values in each time bin.

Parameters:
dt: float

Binning time of the light curve

Returns:
ts: stingray.StingrayTimeseries object
Other Parameters:
array_attrs: list of str

List of attributes to be converted to light curve arrays. If None, all array attributes will be converted.

to_lc(dt, tstart=None, tseg=None)[source]

Convert event list to a stingray.Lightcurve object.

Parameters:
dt: float

Binning time of the light curve

Returns:
lc: stingray.Lightcurve object
Other Parameters:
tstartfloat

Start time of the light curve

tseg: float

Total duration of light curve
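
For example (a minimal sketch):

>>> ev = EventList(time=[0.5, 1.5, 1.6, 2.5], gti=[[0, 3]])
>>> lc = ev.to_lc(dt=1)
>>> assert np.allclose(lc.counts, [1, 2, 1])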

to_lc_iter(dt, segment_size=None)[source]

Convert event list to a generator of Lightcurves.

Parameters:
dt: float

Binning time of the light curves

Returns:
lc_gen: generator

Generates one stingray.Lightcurve object for each GTI or segment

Other Parameters:
segment_sizefloat, default None

Optional segment size. If None, use the GTI boundaries

to_lc_list(dt, segment_size=None)[source]

Convert event list to a list of Lightcurves.

Parameters:
dt: float

Binning time of the light curves

Returns:
lc_list: List

List containing one stingray.Lightcurve object for each GTI or segment

Other Parameters:
segment_sizefloat, default None

Optional segment size. If None, use the GTI boundaries


Fourier Products

These classes implement commonly used Fourier analysis products, most importantly Crossspectrum and Powerspectrum, along with the variants for averaged cross/power spectra.

Crossspectrum

class stingray.Crossspectrum(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, power_type='all', dt=None, fullspec=False, skip_checks=False, save_all=False)[source]
classical_significances(threshold=1, trial_correction=False)[source]

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are returned, with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
thresholdfloat, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correctionbool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this correction should also be applied!

Returns:
pvalsiterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.

coherence()[source]

Compute Coherence function of the cross spectrum.

Coherence is defined in Vaughan and Nowak, 1996 [1]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Returns:
cohnumpy.ndarray

Coherence function

References

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)[source]

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

ratefloat

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_ratefloat, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_kint, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approxint, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two event lists

Parameters:
events1stingray.EventList

Events from channel 1

events2stingray.EventList

Events from channel 2

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “frac”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
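
A usage sketch; events1 and events2 are assumed to be simultaneous EventList objects from two energy bands, and dt is an arbitrary choice:

>>> cs = Crossspectrum.from_events(events1, events2, dt=0.01, norm="leahy")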

static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two light curves

Parameters:
iter_lc1iterable of stingray.Lightcurve objects or np.array

Light curves from channel 1. If arrays, use them as counts

iter_lc2iterable of stingray.Lightcurve objects or np.array

Light curves from channel 2. If arrays, use them as counts

dtfloat

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “frac”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

save_allbool, default False

If True, save the cross spectrum of each segment in the cs_all attribute of the output Crossspectrum object.

static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two light curves

Parameters:
lc1stingray.Lightcurve

Light curve from channel 1

lc2stingray.Lightcurve

Light curve from channel 2

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “frac”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_stingray_timeseries(ts1, ts2, flux_attr, error_flux_attr=None, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two light curves

Parameters:
ts1stingray.Timeseries

Time series from channel 1

ts2stingray.Timeseries

Time series from channel 2

flux_attrstr

What attribute of the time series will be used.

Other Parameters:
error_flux_attrstr

What attribute of the time series will be used as error bar.

segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “frac”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)[source]

Calculate AveragedCrossspectrum from two arrays of event times.

Parameters:
times1np.array

Event arrival times of channel 1

times2np.array

Event arrival times of channel 2

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

gti[[gti0, gti1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

normstr, default “frac”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

initial_checks(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, segment_size=None, power_type='real', dt=None, fullspec=False)[source]

Run initial checks on the input.

Returns True if checks are passed, False if they are not.

Raises various errors for different bad inputs

Examples

>>> times = np.arange(0, 10)
>>> counts = np.random.poisson(100, 10)
>>> lc1 = Lightcurve(times, counts, skip_checks=True)
>>> lc2 = Lightcurve(times, counts, skip_checks=True)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> c = Crossspectrum()
>>> ac = AveragedCrossspectrum()

If norm is not a string, raise a TypeError

>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
...
TypeError: norm must be a string...

If norm is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
...
ValueError: norm must be 'frac'...

If power_type is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, power_type="blabla")
Traceback (most recent call last):
...
ValueError: power_type not recognized!

If the user passes only one light curve, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...

If the user passes an event list without dt, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...
phase_lag()[source]

Calculate the fourier phase lag of the cross spectrum.

This is defined as the argument of the complex cross spectrum, and gives the delay at all frequencies, in cycles, of one input light curve with respect to the other.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)[source]

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labelsiterable, default None

A list of tuple with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

titlestr, default None

The title of the plot.

markerstr, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional, default False

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.

rebin(df=None, f=None, method='mean')[source]

Rebin the cross spectrum to a new frequency resolution df.

Parameters:
df: float

The new frequency resolution

Returns:
bin_cs = Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

Other Parameters:
f: float

the rebin factor. If specified, it substitutes df with f*self.df

rebin_log(f=0.01)[source]

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_specCrossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

time_lag()[source]

Calculate the fourier time lag of the cross spectrum. The time lag is calculated by taking the phase lag \(\phi\) and computing

\[\tau = \frac{\phi}{2 \pi \nu}\]

where \(\nu\) is the center of the frequency bins.
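
A usage sketch; cs is assumed to be a cross spectrum computed from two simultaneous light curves:

>>> lag = cs.time_lag()  # one time lag (in seconds) per frequency bin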

to_norm(norm, inplace=False)[source]

Convert Cross spectrum to new normalization.

Parameters:
normstr

The new normalization of the spectrum

Returns:
new_specobject, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

type = 'crossspectrum'

Make a cross spectrum from a (binned) light curve. You can also make an empty Crossspectrum object to populate with your own Fourier-transformed data (this can sometimes be useful when making binned power spectra). Stingray uses the scipy.fft standards for the sign of the Nyquist frequency.

Parameters:
data1: :class:`stingray.Lightcurve` or :class:`stingray.events.EventList`, optional, default ``None``

The dataset for the first channel/band of interest.

data2: :class:`stingray.Lightcurve` or :class:`stingray.events.EventList`, optional, default ``None``

The dataset for the second, or “reference”, band.

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``none``

The normalization of the (real part of the) cross spectrum.

power_type: string, optional, default ``real``

Parameter to choose among the complete (complex) cross spectrum, its real part, or its magnitude.

fullspec: boolean, optional, default ``False``

If False, keep only the positive frequencies; if True, keep all of them.

Other Parameters:
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input Lightcurve GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the Lightcurve objects before making the cross spectrum.

lc1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data1, but no stingray.events.EventList objects allowed

lc2: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data2, but no stingray.events.EventList objects allowed

dt: float

The time resolution of the light curve. Only needed when constructing light curves in the case where data1, data2 are EventList objects

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!)

Attributes:
freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples

power: numpy.ndarray

The array of cross spectra (complex numbers)

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.

df: float

The frequency resolution

m: int

The number of averaged cross-spectra amplitudes in each bin.

n: int

The number of data points/time bins in one segment of the light curves.

k: array of int

The rebinning scheme, if the object has been rebinned; otherwise set to 1.

nphots1: float

The total number of photons in light curve 1

nphots2: float

The total number of photons in light curve 2


Coherence

Convenience function to compute the coherence between two stingray.Lightcurve objects.

stingray.coherence(lc1, lc2)[source]

Estimate coherence function of two light curves. For details on the definition of the coherence, see Vaughan and Nowak, 1996 [2].

Parameters:
lc1: :class:`stingray.Lightcurve` object

The first light curve data for the channel of interest.

lc2: :class:`stingray.Lightcurve` object

The light curve data for the reference band

Returns:
cohnp.ndarray

The array of coherence versus frequency

References
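
A usage sketch; lc1 and lc2 are assumed to be simultaneous Lightcurve objects:

>>> from stingray import coherence
>>> coh = coherence(lc1, lc2)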


Powerspectrum

class stingray.Powerspectrum(data=None, norm='frac', gti=None, dt=None, lc=None, skip_checks=False)[source]
_initialize_empty()[source]

Set all attributes to None.

_initialize_from_any_input(data, dt=None, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Initialize the class, trying to understand the input types.

The input arguments are the same as __init__(). Based on the type of data, this method will call the appropriate powerspectrum_from_XXXX function, and initialize self with the correct attributes.

_normalize_crossspectrum(unnorm_power)

Normalize the real part of the cross spectrum to Leahy, absolute rms^2, fractional rms^2 normalization, or not at all.

Parameters:
unnorm_power: numpy.ndarray

The unnormalized cross spectrum.

Returns:
power: numpy.ndarray

The normalized co-spectrum (real part of the cross spectrum). For 'none' normalization, the imaginary part is returned as well.

_operation_with_other_obj(other, operation, operated_attrs=None, error_attrs=None, error_operation=None, inplace=False)

Helper method to codify an operation of one time series with another (e.g. add, subtract). Takes into account the GTIs, and returns a new StingrayTimeseries object.

Parameters:
otherStingrayTimeseries object

A second time series object

operationfunction

An operation between the StingrayTimeseries object calling this method, and other, operating on all the specified array attributes.

Returns:
ts_newStingrayTimeseries object

The new time series calculated in operation

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

The function used for error propagation. Defaults to the sum of squares.

_rms_error(powers)[source]

Compute the error on the fractional rms amplitude using error propagation. Note: this uses the actual measured powers, which is not strictly correct. We should be using the underlying power spectrum, but in the absence of an estimate of that, this will have to do.

\[r = \sqrt{P}\]
\[\delta r = \frac{1}{2 \sqrt{P}} \delta P\]
Parameters:
powers: iterable

The list of powers used to compute the fractional rms amplitude.

Returns:
delta_rms: float

The error on the fractional rms amplitude.

add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
maskarray of bool

The mask. Has to be of the same length as self.time

Returns:
ts_newStingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplacebool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrslist of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)[source]

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are returned, with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
thresholdfloat, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correctionbool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this correction should also be applied!

Returns:
pvalsiterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.
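
A usage sketch, following the assumptions listed above; ps is assumed to be a Leahy-normalized power spectrum:

>>> pvals = ps.classical_significances(threshold=0.01, trial_correction=True)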

coherence()

Compute Coherence function of the cross spectrum.

Coherence is defined in Vaughan and Nowak, 1996 [3]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Returns:
cohnumpy.ndarray

Coherence function

References

compute_rms(min_freq, max_freq, poisson_noise_level=None)[source]

Compute the fractional rms amplitude in the power spectrum between two frequencies.

Parameters:
min_freq: float

The lower frequency bound for the calculation.

max_freq: float

The upper frequency bound for the calculation.

Returns:
rms: float

The fractional rms amplitude contained between min_freq and max_freq.

rms_err: float

The error on the fractional rms amplitude.

Other Parameters:
poisson_noise_levelfloat, default is None

This is the Poisson noise level of the PDS, in the same normalization as the PDS. If poisson_noise_level is None, the Poisson noise is calculated in the ideal case, e.g. 2./<countrate> for fractional rms normalization. Dead time and other instrumental effects can alter it. The user can fit the Poisson noise level outside this function, using the same normalization as the PDS, and it will get subtracted from the powers here.
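
A usage sketch; ps is assumed to be a power spectrum in fractional rms normalization, and the frequency bounds are arbitrary:

>>> rms, rms_err = ps.compute_rms(min_freq=0.1, max_freq=10.0)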

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize such as other StingrayObject, or arrays of objects.

This function does not distinguish between array-like data and scalar data.

Returns:
data_attributeslist of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

ratefloat

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_ratefloat, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_kint, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approxint, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

dict() dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the table metadata will form the new meta attributes of the object.

static from_events(events, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Calculate an average power spectrum from an event list.

Parameters:
eventsstingray.EventList

Event list to be analyzed.

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

save_allbool, default False

Save all intermediate PDSs used for the final average.
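
A usage sketch; events is assumed to be an EventList, and dt an arbitrary choice:

>>> ps = Powerspectrum.from_events(events, dt=1 / 256, norm="frac")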

static from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Calculate the average power spectrum of an iterable collection of light curves.

Parameters:
iter_lciterable of stingray.Lightcurve objects or np.array

Light curves. If arrays, use them as counts.

dtfloat

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

save_allbool, default False

Save all intermediate PDSs used for the final average.

static from_lightcurve(lc, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Calculate a power spectrum from a light curve.

Parameters:
lcstingray.Lightcurve

Light curve to be analyzed.

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

save_allbool, default False

Save all intermediate PDSs used for the final average.

classmethod from_pandas(ts: DataFrame) Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the dataframe's attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

static from_stingray_timeseries(ts, flux_attr, error_flux_attr=None, segment_size=None, norm='none', silent=False, use_common_mean=True, gti=None, save_all=False)[source]

Calculate AveragedPowerspectrum from a time series.

Parameters:
tsstingray.Timeseries

Input Time Series

flux_attrstr

What attribute of the time series will be used.

Other Parameters:
error_flux_attrstr

What attribute of the time series will be used as error bar.

segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “frac”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

save_allbool, default False

Save all intermediate PDSs used for the final average.

static from_time_array(times, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)[source]

Calculate an average power spectrum from an array of event times.

Parameters:
timesnp.array

Event arrival times.

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

classmethod from_xarray(ts: Dataset) Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Give a dictionary with all non-None meta attrs of the object.

initial_checks(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, segment_size=None, power_type='real', dt=None, fullspec=False)

Run initial checks on the input.

Returns True if checks are passed, False if they are not.

Raises various errors for different bad inputs

Examples

>>> times = np.arange(0, 10)
>>> counts = np.random.poisson(100, 10)
>>> lc1 = Lightcurve(times, counts, skip_checks=True)
>>> lc2 = Lightcurve(times, counts, skip_checks=True)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> c = Crossspectrum()
>>> ac = AveragedCrossspectrum()

If norm is not a string, raise a TypeError

>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
...
TypeError: norm must be a string...

If norm is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
...
ValueError: norm must be 'frac'...

If power_type is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, power_type="blabla")
Traceback (most recent call last):
...
ValueError: power_type not recognized!

If the user passes only one light curve, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...

If the user passes an event list without dt, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...
internal_array_attrs() list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

meta_attrs() list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributeslist of str

List of meta attributes.

modulation_upper_limit(fmin=None, fmax=None, c=0.95)[source]

Upper limit on a sinusoidal modulation.

To understand the meaning of this amplitude: if the modulation is described by

\[p = \overline{p} (1 + a \sin(x)),\]

this function returns \(a\).

If it is instead a sum of sinusoidal harmonics,

\[p = \overline{p} \left(1 + \sum_l a_l \sin(l x)\right),\]

\(a\) is equivalent to \(\sqrt{\sum_l a_l^2}\).

See stingray.stats.power_upper_limit, stingray.stats.amplitude_upper_limit for more information.

The formula used to calculate the upper limit assumes the Leahy normalization. If the periodogram is in another normalization, we will internally convert it to Leahy before calculating the upper limit.

Parameters:
fmin: float

The minimum frequency to search (defaults to the first nonzero bin)

fmax: float

The maximum frequency to search (defaults to the Nyquist frequency)

Returns:
a: float

The modulation amplitude that could produce P>pmeas with 1 - c probability.

Other Parameters:
c: float

The confidence value for the upper limit (e.g. 0.95 = 95%)

Examples

>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> assert np.isclose(
...     pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99),
...     0.10164,
...     atol=0.0001)
phase_lag()

Calculate the Fourier phase lag of the cross spectrum.

This is defined as the argument of the complex cross spectrum, and gives the delay at all frequencies, in cycles, of one input light curve with respect to the other.
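
A minimal sketch with randomly generated, illustrative data (not from the source):

>>> import numpy as np
>>> from stingray import Lightcurve, Crossspectrum
>>> times = np.arange(1000)
>>> lc1 = Lightcurve(times, np.random.poisson(100, 1000))
>>> lc2 = Lightcurve(times, np.random.poisson(100, 1000))
>>> cs = Crossspectrum(lc1, lc2)
>>> lags = cs.phase_lag()  # one phase lag per frequency bin
>>> lags.shape == cs.freq.shape
True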

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labelsiterable, default None

A list of tuple with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.

titlestr, default None

The title of the plot.

markerstr, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional, default False

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_applyfunction

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_applylist of str

Attributes to be modified by func_to_apply.

attrs_to_discardlist of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso

Generic reader for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc., with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.

Note: complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file
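
A minimal round-trip sketch pairing read with write (the file name and values are illustrative, not from the source):

>>> import numpy as np
>>> from stingray import Powerspectrum
>>> ps = Powerspectrum()
>>> ps.norm = "leahy"
>>> ps.freq = np.arange(1., 5.)
>>> ps.power = np.array([1., 2., 3., 4.])
>>> ps.m = 1
>>> ps.write("ps.ecsv", fmt="ascii.ecsv")
>>> ps2 = Powerspectrum.read("ps.ecsv", fmt="ascii.ecsv")
>>> np.allclose(ps2.power, ps.power)
True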

rebin(df=None, f=None, method='mean')[source]

Rebin the power spectrum.

Parameters:
df: float

The new frequency resolution.

Returns:
bin_cs: Powerspectrum object

The newly binned power spectrum.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes df with f*self.df, so f>1 is recommended.

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_specCrossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
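
A brief sketch showing both linear and logarithmic rebinning (illustrative data, not from the source):

>>> import numpy as np
>>> from stingray import Lightcurve, Powerspectrum
>>> lc = Lightcurve(np.arange(1000), np.random.poisson(100, 1000))
>>> ps = Powerspectrum(lc)
>>> ps_lin = ps.rebin(df=0.01)   # uniform bins, 10x coarser than ps.df
>>> ps_log = ps.rebin_log(f=0.1)  # bin width grows by 10% per bin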

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

time_lag()

Calculate the Fourier time lag of the cross spectrum. The time lag is calculated from the phase lag \(\phi\) as

\[\tau = \frac{\phi}{2 \pi \nu}\]

where \(\nu\) is the center of the frequency bins.

to_astropy_table(no_longdouble=False) Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdoublebool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
normstr

The new normalization of the spectrum

Returns:
new_specobject, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one
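
As a hedged sketch (``spec`` here stands for any existing cross spectrum or power spectrum; it is not a name from the source):

>>> leahy_spec = spec.to_norm("leahy")  # return a new, Leahy-normalized object
>>> spec.to_norm("abs", inplace=True)   # or convert the current instance in place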

to_pandas() DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.
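
A hedged round-trip sketch (assuming an existing EventList ``ev``):

>>> df = ev.to_pandas()
>>> ev2 = EventList.from_pandas(df)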

to_xarray() Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

type = 'powerspectrum'

Make a Powerspectrum (also called periodogram) from a (binned) light curve. Periodograms can be normalized by either Leahy normalization, fractional rms normalization, absolute rms normalization, or not at all.

You can also make an empty Powerspectrum object to populate with your own Fourier-transformed data (this can sometimes be useful when making binned power spectra).

Parameters:
data: :class:`stingray.Lightcurve` object, optional, default ``None``

The light curve data to be Fourier-transformed.

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the power spectrum to be used. Options are “leahy”, “frac”, “abs” and “none”; default is “frac”.

Other Parameters:
gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!).

Attributes:
norm: {“leahy” | “frac” | “abs” | “none” }

The normalization of the power spectrum.

freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

power: numpy.ndarray

The array of normalized squared absolute values of Fourier amplitudes.

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra of segments of a light curve). Note that for a single realization (m=1) the error is equal to the power.

unnorm_power: numpy.ndarray

The array of unnormalized powers

unnorm_power_err: numpy.ndarray

The uncertainties of unnorm_power.

df: float

The frequency resolution.

m: int

The number of averaged powers in each bin.

n: int

The number of data points in the light curve.

nphots: float

The total number of photons in the light curve.

write(filename: str, fmt: str | None = None) None

Generic writer for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Note: complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


AveragedCrossspectrum

class stingray.AveragedCrossspectrum(data1=None, data2=None, segment_size=None, norm='frac', gti=None, power_type='all', silent=False, lc1=None, lc2=None, dt=None, fullspec=False, save_all=False, use_common_mean=True, skip_checks=False)[source]
add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
maskarray of bool

The mask. Has to be of the same length as self.time

Returns:
ts_newStingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplacebool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrslist of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.
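
For instance (a sketch with illustrative data, not from the source):

>>> import numpy as np
>>> from stingray import EventList
>>> ev = EventList(np.arange(10.))
>>> ev_new = ev.apply_mask(ev.time > 5)  # keep only events after t=5
>>> ev_new.time.size
4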

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported, along with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
thresholdfloat, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correctionbool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this correction should also be applied!

Returns:
pvalsiterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.
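
A usage sketch (``lc`` stands for an existing Lightcurve; the threshold value is illustrative):

>>> ps = Powerspectrum(lc, norm="leahy")
>>> pvals = ps.classical_significances(threshold=0.01, trial_correction=True)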

coherence()[source]

Averaged Coherence function.

Coherence is defined in Vaughan and Nowak, 1996 [4]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Compute an averaged coherence function of the cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.

Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.

Returns:
(coh, uncertainty)tuple of np.ndarray

Tuple comprising the coherence function and uncertainty.
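
A minimal sketch with randomly generated, illustrative events (not from the source):

>>> import numpy as np
>>> from stingray import EventList, AveragedCrossspectrum
>>> ev1 = EventList(np.sort(np.random.uniform(0, 1000, 10000)), gti=[[0, 1000]])
>>> ev2 = EventList(np.sort(np.random.uniform(0, 1000, 10000)), gti=[[0, 1000]])
>>> acs = AveragedCrossspectrum(ev1, ev2, segment_size=100, dt=1.0)
>>> coh, coh_err = acs.coherence()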

References

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize, such as other StingrayObject instances or arrays of objects.

This function does not distinguish between array-like and scalar data.

Returns:
data_attributeslist of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

ratefloat

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_ratefloat, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_kint, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approxint, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.
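
A brief sketch (``ps`` stands for an existing power spectrum; the dead time and count rate values are illustrative):

>>> ps_corr = ps.deadtime_correct(dead_time=2.5e-3, rate=300.0)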

dict() dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the table's meta dictionary will form the new meta attributes of the object.

static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two event lists

Parameters:
events1stingray.EventList

Events from channel 1

events2stingray.EventList

Events from channel 2

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
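
For example (a sketch; ``ev1`` and ``ev2`` stand for two existing EventList objects, and the numbers are illustrative):

>>> acs = AveragedCrossspectrum.from_events(ev1, ev2, dt=1.0, segment_size=100, norm="leahy")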

static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
iter_lc1iterable of stingray.Lightcurve objects or np.array

Light curves from channel 1. If arrays, use them as counts

iter_lc2iterable of stingray.Lightcurve objects or np.array

Light curves from channel 2. If arrays, use them as counts

dtfloat

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

save_allbool, default False

If True, save the cross spectrum of each segment in the cs_all attribute of the output Crossspectrum object.

static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
lc1stingray.Lightcurve

Light curve from channel 1

lc2stingray.Lightcurve

Light curve from channel 2

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

classmethod from_pandas(ts: DataFrame) Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

static from_stingray_timeseries(ts1, ts2, flux_attr, error_flux_attr=None, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
ts1stingray.Timeseries

Time series from channel 1

ts2stingray.Timeseries

Time series from channel 2

flux_attrstr

What attribute of the time series will be used.

Other Parameters:
error_flux_attrstr

What attribute of the time series will be used as error bar.

segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)

Calculate AveragedCrossspectrum from two arrays of event times.

Parameters:
times1np.array

Event arrival times of channel 1

times2np.array

Event arrival times of channel 2

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

gti[[gti0, gti1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

classmethod from_xarray(ts: Dataset) Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Give a dictionary with all non-None meta attrs of the object.

initial_checks(data1, segment_size=None, **kwargs)[source]

Examples

>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()

If AveragedCrossspectrum, you need segment_size

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...

And it needs to be finite!

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!

internal_array_attrs() list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

meta_attrs() list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributeslist of str

List of meta attributes.

phase_lag()[source]

Return the Fourier phase lag of the cross spectrum.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labelsiterable, default None

A list of tuple with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.

titlestr, default None

The title of the plot.

markerstr, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional, default False

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_applyfunction

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_applylist of str

Attributes to be modified by func_to_apply.

attrs_to_discardlist of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso

Generic reader for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc., with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.

Note: complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file

rebin(df=None, f=None, method='mean')

Rebin the cross spectrum to a new frequency resolution df.

Parameters:
df: float

The new frequency resolution

Returns:
bin_cs: Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes df with f*self.df.

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_specCrossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

time_lag()[source]

Calculate time lag and uncertainty.

Equation from Bendat & Piersol (2011).

Returns:
lagnp.ndarray

The time lag

lag_errnp.ndarray

The uncertainty in the time lag
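
A sketch (``acs`` stands for an existing AveragedCrossspectrum, as built in the examples above):

>>> lag, lag_err = acs.time_lag()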

to_astropy_table(no_longdouble=False) Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdoublebool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
normstr

The new normalization of the spectrum

Returns:
new_specobject, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

to_pandas() DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

type = 'crossspectrum'

Make an averaged cross spectrum from a light curve by segmenting two light curves, Fourier-transforming each segment and then averaging the resulting cross spectra.

Parameters:
data1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object

A light curve from which to compute the cross spectrum. In some cases, this would be the light curve of the wavelength/energy/frequency band of interest.

data2: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object

A second light curve to use in the cross spectrum. In some cases, this would be the wavelength/energy/frequency reference band to compare the band of interest with.

segment_size: float

The size of each segment to average. Note that if the total duration of each Lightcurve object in lc1 or lc2 is not an integer multiple of the segment_size, then any fraction left-over at the end of the time series will be lost. Otherwise you introduce artifacts.

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``none``

The normalization of the (real part of the) cross spectrum.

Other Parameters:
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

dtfloat

The time resolution of the light curve. Only needed when constructing light curves, in the case where data1 or data2 are :class:`EventList` objects.

power_type: string, optional, default ``all``

Parameter to choose among the complete cross spectrum, its real part, or its magnitude.

silentbool, default False

Do not show a progress bar when generating an averaged cross spectrum. Useful for the batch execution of many spectra

lc1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data1, but no stingray.events.EventList objects allowed

lc2: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data2, but no stingray.events.EventList objects allowed

fullspec: boolean, optional, default ``False``

If True, return the full array of frequencies, otherwise return just the positive frequencies.

save_allbool, default False

Save all intermediate PDSs used for the final average. Use with care. This is likely to fill up your RAM on medium-sized datasets, and to slow down the computation when rebinning.

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!)

use_common_mean: bool

Averaged cross spectra are normalized in two possible ways: one is by normalizing each of the single spectra that get averaged, the other is by normalizing after the averaging. If use_common_mean is selected, the spectrum will be normalized after the average.

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

Attributes:
freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

power: numpy.ndarray

The array of cross spectra.

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra of segments of a light curve). Note that for a single realization (m=1) the error is equal to the power.

df: float

The frequency resolution.

m: int

The number of averaged cross spectra.

n: int

The number of time bins per segment of light curve.

nphots1: float

The total number of photons in the first (interest) light curve.

nphots2: float

The total number of photons in the second (reference) light curve.

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals.
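
Putting the pieces together, a hedged end-to-end sketch (randomly generated, illustrative data, not from the source):

>>> import numpy as np
>>> from stingray import Lightcurve, AveragedCrossspectrum
>>> times = np.arange(10000) * 0.1
>>> lc1 = Lightcurve(times, np.random.poisson(100, times.size), dt=0.1, skip_checks=True)
>>> lc2 = Lightcurve(times, np.random.poisson(100, times.size), dt=0.1, skip_checks=True)
>>> acs = AveragedCrossspectrum(lc1, lc2, segment_size=100.0, norm="leahy")
>>> acs.m > 1  # several segments were averaged
True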

write(filename: str, fmt: str | None = None) None

Generic writer for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Note: complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


AveragedPowerspectrum

class stingray.AveragedPowerspectrum(data=None, segment_size=None, norm='frac', gti=None, silent=False, dt=None, lc=None, large_data=False, save_all=False, skip_checks=False, use_common_mean=True)[source]
add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
maskarray of bool

The mask. Has to be of the same length as self.time

Returns:
ts_newStingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplacebool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrslist of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported, along with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
thresholdfloat, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correctionbool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this correction should also be applied!

Returns:
pvalsiterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.

coherence()

Averaged Coherence function.

Coherence is defined in Vaughan and Nowak, 1996 [5]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Compute an averaged coherence function of the cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.

Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.

Returns:
(coh, uncertainty)tuple of np.ndarray

Tuple comprising the coherence function and uncertainty.

References

compute_rms(min_freq, max_freq, poisson_noise_level=None)

Compute the fractional rms amplitude in the power spectrum between two frequencies.

Parameters:
min_freq: float

The lower frequency bound for the calculation.

max_freq: float

The upper frequency bound for the calculation.

Returns:
rms: float

The fractional rms amplitude contained between min_freq and max_freq.

rms_err: float

The error on the fractional rms amplitude.

Other Parameters:
poisson_noise_levelfloat, default is None

This is the Poisson noise level of the PDS with the same normalization as the PDS. If poisson_noise_level is None, the Poisson noise is calculated in the ideal case, e.g. 2 / <count rate> for fractional rms normalization. Dead time and other instrumental effects can alter it. The user can fit the Poisson noise level outside this function, using the same normalization as the PDS, and it will be subtracted from the powers here.
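
Usage sketch (``ps`` stands for an existing power spectrum in fractional rms normalization; the frequency bounds are illustrative):

>>> rms, rms_err = ps.compute_rms(min_freq=0.1, max_freq=10.0)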

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize, such as other StingrayObject instances or arrays of objects.

This function does not distinguish between array-like and scalar data.

Returns:
data_attributeslist of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

ratefloat

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_ratefloat, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_kint, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approxint, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

dict() dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the table's meta dictionary will form the new meta attributes of the object.

static from_events(events, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)

Calculate an average power spectrum from an event list.

Parameters:
eventsstingray.EventList

Event list to be analyzed.

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

save_allbool, default False

Save all intermediate PDSs used for the final average.
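
For example (a sketch with randomly generated, illustrative data, not from the source):

>>> import numpy as np
>>> from stingray import EventList, AveragedPowerspectrum
>>> ev = EventList(np.sort(np.random.uniform(0, 1000, 10000)), gti=[[0, 1000]])
>>> aps = AveragedPowerspectrum.from_events(ev, dt=1.0, segment_size=100, norm="leahy")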

static from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)

Calculate the average power spectrum of an iterable collection of light curves.

Parameters:
iter_lciterable of stingray.Lightcurve objects or np.array

Light curves. If arrays, use them as counts.

dtfloat

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

save_allbool, default False

Save all intermediate PDSs used for the final average.

static from_lightcurve(lc, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)

Calculate a power spectrum from a light curve.

Parameters:
lcstingray.Lightcurve

Light curve to be analyzed.

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

save_allbool, default False

Save all intermediate PDSs used for the final average.

classmethod from_pandas(ts: DataFrame) Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

static from_stingray_timeseries(ts, flux_attr, error_flux_attr=None, segment_size=None, norm='none', silent=False, use_common_mean=True, gti=None, save_all=False)

Calculate AveragedPowerspectrum from a time series.

Parameters:
tsstingray.Timeseries

Input Time Series

flux_attrstr

What attribute of the time series will be used.

Other Parameters:
error_flux_attrstr

What attribute of the time series will be used as error bar.

segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the GTIs of the input object. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

save_allbool, default False

Save all intermediate PDSs used for the final average.

static from_time_array(times, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)

Calculate an average power spectrum from an array of event times.

Parameters:
timesnp.array

Event arrival times.

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

normstr, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silentbool, default False

Silence the progress bars.

classmethod from_xarray(ts: Dataset) Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Give a dictionary with all non-None meta attrs of the object.

initial_checks(*args, **kwargs)[source]

Examples

>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()

If AveragedCrossspectrum, you need segment_size

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...

And it needs to be finite! >>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan) Traceback (most recent call last): … ValueError: segment_size must be finite!

internal_array_attrs() → list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

meta_attrs() → list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributeslist of str

List of meta attributes.

modulation_upper_limit(fmin=None, fmax=None, c=0.95)

Upper limit on a sinusoidal modulation.

To understand the meaning of this amplitude: if the modulation is described by:

\[p = \overline{p} (1 + a \sin(x))\]

this function returns a.

If it is a sum of sinusoidal harmonics instead,

\[p = \overline{p} \left(1 + \sum_l a_l \sin(l x)\right)\]

a is equivalent to \(\sqrt{\sum_l a_l^2}\).

See stingray.stats.power_upper_limit, stingray.stats.amplitude_upper_limit for more information.

The formula used to calculate the upper limit assumes the Leahy normalization. If the periodogram is in another normalization, we will internally convert it to Leahy before calculating the upper limit.

Parameters:
fmin: float

The minimum frequency to search (defaults to the first nonzero bin)

fmax: float

The maximum frequency to search (defaults to the Nyquist frequency)

Returns:
a: float

The modulation amplitude that could produce P>pmeas with 1 - c probability.

Other Parameters:
c: float

The confidence value for the upper limit (e.g. 0.95 = 95%)

Examples

>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> assert np.isclose(
...     pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99),
...     0.10164,
...     atol=0.0001)
phase_lag()

Return the fourier phase lag of the cross spectrum.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labelsiterable, default None

A list of tuples with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.

titlestr, default None

The title of the plot.

markerstr, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional, default False

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_applyfunction

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_applylist of str

Attributes to be modified by func_to_apply.

attrs_to_discardlist of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) → Tso

Generic reader for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

.. note::

Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file
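
For example, a round trip through ECSV might look like this (a sketch; pds stands for any object inheriting from StingrayObject):

>>> pds.write("pds.ecsv", fmt="ascii.ecsv")
>>> pds_copy = type(pds).read("pds.ecsv", fmt="ascii.ecsv")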

rebin(df=None, f=None, method='mean')

Rebin the power spectrum.

Parameters:
df: float

The new frequency resolution.

Returns:
bin_cs: Powerspectrum object

The newly binned power spectrum.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes df with f*self.df, so f>1 is recommended.

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_specCrossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
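
Both rebinning flavors, sketched for an existing power spectrum ps (values are illustrative):

>>> ps_lin = ps.rebin(df=0.1)      # linear rebin to a 0.1 Hz resolution
>>> ps_geo = ps.rebin_log(f=0.02)  # each bin ~2% wider than the previous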

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

time_lag()

Calculate time lag and uncertainty.

Equation from Bendat & Piersol, 2011 [bendat-2011].

Returns:
lagnp.ndarray

The time lag

lag_errnp.ndarray

The uncertainty in the time lag
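
A sketch of the call, assuming cs is an averaged cross spectrum:

>>> lag, lag_err = cs.time_lag()  # one lag (and error) per frequency bin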

to_astropy_table(no_longdouble=False) → Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdoublebool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
normstr

The new normalization of the spectrum

Returns:
new_specobject, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

to_pandas() → DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the DataFrame's attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() → Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

type = 'powerspectrum'

Make an averaged periodogram from a light curve by segmenting the light curve, Fourier-transforming each segment and then averaging the resulting periodograms.

Parameters:
data: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object

The light curve data to be Fourier-transformed.

segment_size: float

The size of each segment to average. Note that if the total duration of each Lightcurve object in lc is not an integer multiple of the segment_size, then any fraction left-over at the end of the time series will be lost.

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the periodogram to be used.

Other Parameters:
gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

silentbool, default False

Do not show a progress bar when generating an averaged cross spectrum. Useful for the batch execution of many spectra.

dt: float

The time resolution of the light curve. Only needed when constructing light curves in the case where data is of :class:`EventList`.

save_allbool, default False

Save all intermediate PDSs used for the final average. Use with care. This is likely to fill up your RAM on medium-sized datasets, and to slow down the computation when rebinning.

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!).

Attributes:
norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }

The normalization of the periodogram.

freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

power: numpy.ndarray

The array of normalized powers

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging the power spectra of segments of a light curve). Note that for a single realization (m=1) the error is equal to the power.

unnorm_power: numpy.ndarray

The array of unnormalized powers

unnorm_power_err: numpy.ndarray

The uncertainties of unnorm_power.

df: float

The frequency resolution.

m: int

The number of averaged periodograms.

n: int

The number of data points in the light curve.

nphots: float

The total number of photons in the light curve.

cs_all: list of :class:`Powerspectrum` objects

The list of all periodograms used to calculate the average periodogram.
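
A minimal, self-contained sketch with simulated white noise (all numbers are illustrative):

>>> import numpy as np
>>> from stingray import Lightcurve, AveragedPowerspectrum
>>> dt = 0.01
>>> time = np.arange(0, 100, dt)
>>> counts = np.random.poisson(10, time.size)
>>> lc = Lightcurve(time, counts, dt=dt)
>>> aps = AveragedPowerspectrum(lc, segment_size=10, norm="leahy")
>>> nseg = aps.m  # number of segments averaged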

write(filename: str, fmt: str | None = None) → None

Generic writer for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

.. note::

Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


Dynamical Powerspectrum

class stingray.DynamicalPowerspectrum(lc, segment_size, norm='frac', gti=None, sample_time=None)[source]
add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
maskarray of bool

The mask. Has to be of the same length as self.time

Returns:
ts_newStingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplacebool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrslist of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() → list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported, along with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
thresholdfloat, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correctionbool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this should also be applied!

Returns:
pvalsiterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.
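
A sketch of the call, assuming ps is a Leahy-normalized (averaged) power spectrum:

>>> # Report only powers with trial-corrected p-values below 1%
>>> pvals = ps.classical_significances(threshold=0.01, trial_correction=True)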

coherence()

Averaged Coherence function.

Coherence is defined in Vaughan and Nowak, 1996 [6]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Compute the averaged coherence function of a cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.

Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.

Returns:
(coh, uncertainty)tuple of np.ndarray

Tuple comprising the coherence function and uncertainty.
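
A sketch of the call, assuming cs is an AveragedCrossspectrum:

>>> coh, coh_err = cs.coherence()  # one value and uncertainty per frequency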


compute_rms(min_freq, max_freq, poisson_noise_level=0)

Compute the fractional rms amplitude in the power spectrum between two frequencies.

Parameters:
min_freq: float

The lower frequency bound for the calculation.

max_freq: float

The upper frequency bound for the calculation.

Returns:
rms: float

The fractional rms amplitude contained between min_freq and max_freq.

rms_err: float

The error on the fractional rms amplitude.

Other Parameters:
poisson_noise_levelfloat, default 0

This is the Poisson noise level of the PDS, with the same normalization as the PDS.
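
A sketch of the call; the frequency range and noise level are made-up values, and the noise level must match the normalization of ps:

>>> rms, rms_err = ps.compute_rms(min_freq=0.1, max_freq=10.0,
...                               poisson_noise_level=2)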

data_attributes() → list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize, such as other StingrayObject instances or arrays of objects.

This function does not distinguish between array-like and scalar data.

Returns:
data_attributeslist of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 1995, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

ratefloat

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_ratefloat, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_kint, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approxint, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.
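
A sketch of the call with made-up instrument numbers (a 2.5 ms dead time and a detected source rate of 300 counts/s), assuming ps is an existing power spectrum:

>>> ps_corr = ps.deadtime_correct(dead_time=2.5e-3, rate=300.0)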

dict() → dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) → Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the table's meta dictionary will form the new meta attributes of the object.

static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two event lists

Parameters:
events1stingray.EventList

Events from channel 1

events2stingray.EventList

Events from channel 2

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
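
A minimal sketch with two simulated, independent event lists (numbers are illustrative):

>>> import numpy as np
>>> from stingray import EventList, AveragedCrossspectrum
>>> rng = np.random.default_rng(1)
>>> ev1 = EventList(np.sort(rng.uniform(0, 1000, 50000)))
>>> ev2 = EventList(np.sort(rng.uniform(0, 1000, 50000)))
>>> cs = AveragedCrossspectrum.from_events(
...     ev1, ev2, dt=0.01, segment_size=100, norm="leahy")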

static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
iter_lc1iterable of stingray.Lightcurve objects or np.array

Light curves from channel 1. If arrays, use them as counts

iter_lc2iterable of stingray.Lightcurve objects or np.array

Light curves from channel 2. If arrays, use them as counts

dtfloat

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

save_allbool, default False

If True, save the cross spectrum of each segment in the cs_all attribute of the output Crossspectrum object.
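
A sketch of the call pattern; lcs1 and lcs2 stand for hypothetical iterables of strictly simultaneous Lightcurve chunks (or count arrays):

>>> cs = AveragedCrossspectrum.from_lc_iterable(
...     lcs1, lcs2, dt=0.01, segment_size=32, norm="leahy")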

static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
lc1stingray.Lightcurve

Light curve from channel 1

lc2stingray.Lightcurve

Light curve from channel 2

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

classmethod from_pandas(ts: DataFrame) → Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the DataFrame's attrs dictionary will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

static from_stingray_timeseries(ts1, ts2, flux_attr, error_flux_attr=None, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two time series

Parameters:
ts1stingray.Timeseries

Time series from channel 1

ts2stingray.Timeseries

Time series from channel 2

flux_attrstr

What attribute of the time series will be used.

Other Parameters:
error_flux_attrstr

What attribute of the time series will be used as error bar.

segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)

Calculate AveragedCrossspectrum from two arrays of event times.

Parameters:
times1np.array

Event arrival times of channel 1

times2np.array

Event arrival times of channel 2

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

gti[[gti0, gti1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

classmethod from_xarray(ts: Dataset) → Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() → dict

Give a dictionary with all non-None meta attrs of the object.

initial_checks(data1, segment_size=None, **kwargs)

Examples

>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()

If AveragedCrossspectrum, you need segment_size:

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...

And it needs to be finite!

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!

internal_array_attrs() → list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

meta_attrs() → list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributeslist of str

List of meta attributes.

phase_lag()

Return the fourier phase lag of the cross spectrum.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labelsiterable, default None

A list of tuples with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.

titlestr, default None

The title of the plot.

markerstr, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional, default False

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.

power_colors(freq_edges=[0.00390625, 0.03125, 0.25, 2.0, 16.0], freqs_to_exclude=None, poisson_power=None)[source]

Return the power colors of the dynamical power spectrum.

Parameters:
freq_edges: iterable

The edges of the frequency bins to be used for the power colors.

freqs_to_exclude1-d or 2-d iterable, optional, default None

The ranges of frequencies to exclude from the calculation of the power color, for example the frequencies containing strong QPOs. A 1-d iterable should contain the two edges of a single range (e.g. [0.1, 0.2]); [[0.1, 0.2], [3, 4]] will exclude the ranges 0.1-0.2 Hz and 3-4 Hz.

poisson_powerfloat or iterable, optional

The Poisson noise level of the power spectrum. Defaults to the theoretical Poisson noise level (e.g. 2 for Leahy normalization). If iterable, it should have the same length as frequency (this might apply to the case of a power spectrum with a strong dead time distortion).

Returns:
pc0: np.ndarray
pc0_err: np.ndarray
pc1: np.ndarray
pc1_err: np.ndarray

The power colors for each spectrum and their respective errors
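
A sketch of the call, assuming dps is a DynamicalPowerspectrum; the excluded range is a made-up QPO interval:

>>> pc0, pc0_err, pc1, pc1_err = dps.power_colors(
...     freqs_to_exclude=[0.1, 0.2])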

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) → str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_applyfunction

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_applylist of str

Attributes to be modified by func_to_apply.

attrs_to_discardlist of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) → Tso

Generic reader for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

.. note::

Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file

rebin(df=None, f=None, method='mean')

Rebin the cross spectrum to a new frequency resolution df.

Parameters:
df: float

The new frequency resolution

Returns:
bin_cs: Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes df with f*self.df.

rebin_by_n_intervals(n, method='average')

Rebin the Dynamic Power Spectrum to a new time resolution, by summing contiguous intervals.

This is different from :meth:`DynamicalPowerspectrum.rebin_time` in that it averages n consecutive intervals regardless of their distance in time. rebin_time will instead average intervals that are separated at most by a time dt_new.

Parameters:
n: int

The number of intervals to be combined into one.

method: {“sum” | “mean” | “average”}, optional, default “average”

This keyword argument sets whether the powers in the new bins should be summed or averaged.

Returns:
time_new: numpy.ndarray

Time axis with new rebinned time resolution.

dynspec_new: numpy.ndarray

New rebinned Dynamical Cross Spectrum.

rebin_frequency(df_new, method='average')

Rebin the Dynamic Power Spectrum to a new frequency resolution. Rebinning is an in-place operation, i.e. will replace the existing dyn_ps attribute.

While the new resolution does not need to be an integer multiple of the previous frequency resolution, be aware that if it is not, the last frequency bin will be cut off by the fraction left over by the integer division

Parameters:
df_new: float

The new frequency resolution of the dynamical power spectrum. Must be larger than the frequency resolution of the old dynamical power spectrum!

method: {“sum” | “mean” | “average”}, optional, default “average”

This keyword argument sets whether the powers in the new bins should be summed or averaged.

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_specCrossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

rebin_time(dt_new, method='average')

Rebin the Dynamic Power Spectrum to a new time resolution.

Note: this is not changing the time resolution of the input light curve! dt is the integration time of each line of the dynamical power spectrum (typically, an integer multiple of segment_size).

While the new resolution does not need to be an integer multiple of the previous time resolution, be aware that if it is not, the last time bin will be cut off by the fraction left over by the integer division

Parameters:
dt_new: float

The new time resolution of the dynamical power spectrum. Must be larger than the time resolution of the old dynamical power spectrum!

method: {“sum” | “mean” | “average”}, optional, default “average”

This keyword argument sets whether the powers in the new bins should be summed or averaged.

Returns:
time_new: numpy.ndarray

Time axis with new rebinned time resolution.

dynspec_new: numpy.ndarray

New rebinned Dynamical Cross Spectrum.
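
A sketch of the call, assuming dps is a DynamicalPowerspectrum whose intervals are shorter than 32 s:

>>> time_new, dynspec_new = dps.rebin_time(dt_new=32.0, method="average")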

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

time_lag()

Calculate time lag and uncertainty.

Equation from Bendat & Piersol, 2011 [bendat-2011].

Returns:
lagnp.ndarray

The time lag

lag_errnp.ndarray

The uncertainty in the time lag

to_astropy_table(no_longdouble=False) → Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdoublebool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
normstr

The new normalization of the spectrum

Returns:
new_specobject, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

to_pandas() → DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the DataFrame's attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() → Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

trace_maximum(min_freq=None, max_freq=None)

Return the indices of the maximum powers in each segment Powerspectrum between specified frequencies.

Parameters:
min_freq: float, default ``None``

The lower frequency bound.

max_freq: float, default ``None``

The upper frequency bound.

Returns:
max_positionsnp.array

The array of indices of the maximum power in each segment having frequency between min_freq and max_freq.

type = 'powerspectrum'

Create a dynamical power spectrum, also often called a spectrogram.

This class will divide a Lightcurve object into segments of length segment_size, create a power spectrum for each segment and store all powers in a matrix as a function of both time (using the mid-point of each segment) and frequency.

This is often used to trace changes in period of a (quasi-)periodic signal over time.

Parameters:
lcstingray.Lightcurve or stingray.EventList object

The time series or event list of which the dynamical power spectrum is to be calculated. If stingray.EventList, sample_time must be specified as well.

segment_sizefloat, default 1

Length of the segment of light curve, default value is 1 (in whatever units the time array in the Lightcurve object uses).

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the periodogram to be used.

Other Parameters:
gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

sample_time: float

Compulsory for input stingray.EventList data. The time resolution of the lightcurve that is created internally from the input event lists. Drives the Nyquist frequency.

Attributes:
segment_size: float

The size of each segment to average. Note that if the total duration of each input object in lc is not an integer multiple of the segment_size, then any fraction left-over at the end of the time series will be lost.

dyn_psnp.ndarray

The matrix of normalized squared absolute values of Fourier amplitudes. The axes are given by the freq and time attributes.

norm: {``leahy`` | ``frac`` | ``abs`` | ``none``}

The normalization of the periodogram.

freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

time: numpy.ndarray

The array of mid-point times of each interval used for the dynamical power spectrum.

df: float

The frequency resolution.

dt: float

The time resolution of the dynamical spectrum. It is not the time resolution of the input light curve. It is the integration time of each line of the dynamical power spectrum (typically, an integer multiple of segment_size).

m: int

The number of averaged cross spectra.
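
A minimal, self-contained sketch with simulated white noise (all numbers are illustrative):

>>> import numpy as np
>>> from stingray import Lightcurve, DynamicalPowerspectrum
>>> dt = 1 / 256
>>> time = np.arange(0, 512, dt)
>>> counts = np.random.poisson(20, time.size)
>>> lc = Lightcurve(time, counts, dt=dt)
>>> dps = DynamicalPowerspectrum(lc, segment_size=16, norm="leahy")
>>> matrix = dps.dyn_ps  # one power spectrum per 16-s interval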

write(filename: str, fmt: str | None = None) → None

Generic writer for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

.. note::

Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


CrossCorrelation

class stingray.CrossCorrelation(lc1=None, lc2=None, cross=None, mode='same', norm='none')[source]

Make a cross-correlation from light curves or a cross spectrum.

You can also make an empty CrossCorrelation object to populate with your own cross-correlation data.

Parameters:
lc1: :class:`stingray.Lightcurve` object, optional, default ``None``

The first light curve data for correlation calculations.

lc2: :class:`stingray.Lightcurve` object, optional, default ``None``

The second light curve data for the correlation calculations.

cross: :class:`stingray.Crossspectrum` object, default ``None``

The cross spectrum data for the correlation calculations.

mode: {``full``, ``valid``, ``same``}, optional, default ``same``

A string indicating the size of the correlation output. See the relevant scipy documentation [scipy-docs] for more details.

norm: {``none``, ``variance``}

if “variance”, the cross correlation is normalized so that perfect correlation gives 1, and perfect anticorrelation gives -1. See Gaskell & Peterson 1987, Gardner & Done 2017


Attributes:
lc1: :class:`stingray.Lightcurve`

The first light curve data for correlation calculations.

lc2: :class:`stingray.Lightcurve`

The second light curve data for the correlation calculations.

cross: :class:`stingray.Crossspectrum`

The cross spectrum data for the correlation calculations.

corr: numpy.ndarray

An array of correlation data calculated from two light curves

time_lags: numpy.ndarray

An array of all possible time lags against which each point in corr is calculated

dt: float

The time resolution of each light curve (used in time_lag calculations)

time_shift: float

Time lag that gives the maximum value of correlation between the two light curves. There will be maximum correlation between the light curves if one of them is shifted by time_shift.

n: int

Number of points in self.corr (length of cross-correlation data)

auto: bool

An internal flag to indicate whether this is a cross-correlation or an auto-correlation.

norm: {``none``, ``variance``}

The normalization specified in input

cal_timeshift(dt=1.0)[source]

Calculate the cross correlation against all possible time lags, both positive and negative.

This method uses signal.correlation_lags(), which requires SciPy >= 1.6.1 ([scipy-docs-lag])

Parameters:
dt: float, optional, default ``1.0``

Time resolution of the light curve. It should be passed when the object is populated with correlation data and no information about the light curve can be extracted. Used to calculate time_lags.

Returns:
self.time_shift: float

Value of the time lag that gives maximum value of correlation between two light curves.

self.time_lags: numpy.ndarray

An array of time_lags calculated from correlation data
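
A sketch of the call pattern; lc1 and lc2 stand for two simultaneous Lightcurve objects:

>>> from stingray import CrossCorrelation
>>> cc = CrossCorrelation(lc1, lc2, mode="same", norm="variance")
>>> shift = cc.time_shift  # lag that maximizes the correlation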


plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)[source]

Plot the cross-correlation as a function of time lag using matplotlib, with self.time_lags on the x-axis and self.corr on the y-axis.

Parameters:
labelsiterable, default None

A list of tuples with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() function.

titlestr, default None

The title of the plot.

markerstr, default -

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional (default=False)

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.


AutoCorrelation

class stingray.AutoCorrelation(lc=None, mode='same')[source]

Make an auto-correlation from a light curve. You can also make an empty AutoCorrelation object to populate with your own auto-correlation data.

Parameters:
lc: :class:`stingray.Lightcurve` object, optional, default ``None``

The light curve data for correlation calculations.

mode: {``full``, ``valid``, ``same``}, optional, default ``same``

A string indicating the size of the correlation output. See the relevant scipy documentation [scipy-docs] for more details.

Attributes:
lc1, lc2: :class:`stingray.Lightcurve`

The light curve data for correlation calculations.

corr: numpy.ndarray

An array of correlation data calculated from lightcurve data

time_lags: numpy.ndarray

An array of all possible time lags against which each point in corr is calculated

dt: float

The time resolution of each lightcurve (used in time_lag calculations)

time_shift: float, zero

The maximum value of an auto-correlation is always at zero lag, so time_shift is always zero.

n: int

Number of points in self.corr (length of the auto-correlation data)

cal_timeshift(dt=1.0)

Calculate the cross correlation against all possible time lags, both positive and negative.

This method uses signal.correlation_lags(), which requires SciPy >= 1.6.1 ([scipy-docs-lag])

Parameters:
dt: float, optional, default ``1.0``

Time resolution of the light curve. It should be passed when the object is populated with correlation data and no information about the light curve can be extracted. Used to calculate time_lags.

Returns:
self.time_shift: float

Value of the time lag that gives maximum value of correlation between two light curves.

self.time_lags: numpy.ndarray

An array of time_lags calculated from correlation data


plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the cross-correlation as a function of time lag using matplotlib, with self.time_lags on the x-axis and self.corr on the y-axis.

Parameters:
labelsiterable, default None

A list of tuples with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() function.

titlestr, default None

The title of the plot.

markerstr, default -

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional (default=False)

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.


Dead-Time Corrections

stingray.deadtime.fad.FAD(data1, data2, segment_size, dt=None, norm='frac', plot=False, ax=None, smoothing_alg='gauss', smoothing_length=None, verbose=False, tolerance=0.05, strict=False, output_file=None, return_objects=False)[source]

Calculate Frequency Amplitude Difference-corrected (cross)power spectra.

Reference: Bachetti & Huppenkothen, 2018, ApJ, 853L, 21

The two input light curves must be strictly simultaneous, and recorded by two independent detectors with similar responses, so that the count rates are similar and dead time is independent. The method does not apply to different energy channels of the same instrument, or to the signal observed by two instruments with very different responses. See the paper for caveats.

Parameters:
data1Lightcurve or EventList

Input data for channel 1

data2Lightcurve or EventList

Input data for channel 2. Must be strictly simultaneous to data1 and, if a light curve, have the same binning time. Also, it must be strictly independent, e.g. from a different detector. There must be no dead time cross-talk between the two time series.

segment_size: float

The final Fourier products are averaged over many segments of the input light curves. This is the length of each segment being averaged. Note that the light curve must be long enough to have at least 30 segments, as the result gets better as one averages more and more segments.

dtfloat

Time resolution of the light curves used to produce periodograms

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``frac``

The normalization of the (real part of the) cross spectrum.

Returns:
resultsastropy.table.Table object or dict or str

The content of results depends on whether return_objects is True or False. If return_objects==False, results is a Table with the following columns:

  • pds1: the corrected PDS of lc1

  • pds2: the corrected PDS of lc2

  • cs: the corrected cospectrum

  • ptot: the corrected PDS of lc1 + lc2

If return_objects is True, results is a dict, with keys named like the columns listed above but with AveragePowerspectrum or AverageCrossspectrum objects instead of arrays.

Other Parameters:
plotbool, default False

Plot diagnostics: check if the smoothed Fourier difference scatter is a good approximation of the data scatter.

axmatplotlib.axes.axes object

If not None and plot is True, use this axis object to produce the diagnostic plot. Otherwise, create a new figure.

smoothing_alg{‘gauss’, …}

Smoothing algorithm. For now, the only smoothing algorithm allowed is gauss, which applies a Gaussian Filter from scipy.

smoothing_lengthint, default segment_size * 3

Number of bins to smooth in gaussian window smoothing

verbose: bool, default False

Print out information on the outcome of the algorithm (recommended)

tolerancefloat, default 0.05

Accepted relative error on the FAD-corrected Fourier amplitude, to be used as success diagnostics. The check performed is:

stdtheor = 2 / np.sqrt(n)
std = (average_corrected_fourier_diff / n).std()
np.abs((std - stdtheor) / stdtheor) < tolerance

strictbool, default False

Decide what to do if the condition on tolerance is not met. If True, raise a RuntimeError. If False, just throw a warning.

output_filestr, default None

Name of an output file (any extension automatically recognized by Astropy is fine)
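
A sketch of the call pattern; ev1 and ev2 stand for strictly simultaneous EventList objects from two independent detectors:

>>> from stingray.deadtime.fad import FAD
>>> results = FAD(ev1, ev2, segment_size=64.0, dt=1 / 256,
...               norm="leahy", return_objects=True)
>>> ptot = results["ptot"]  # FAD-corrected PDS of the summed band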

stingray.deadtime.fad.calculate_FAD_correction(lc1, lc2, segment_size, norm='frac', gti=None, plot=False, ax=None, smoothing_alg='gauss', smoothing_length=None, verbose=False, tolerance=0.05, strict=False, output_file=None, return_objects=False)[source]

Calculate Frequency Amplitude Difference-corrected (cross)power spectra.

Reference: Bachetti & Huppenkothen, 2018, ApJ, 853L, 21

The two input light curves must be strictly simultaneous, and recorded by two independent detectors with similar responses, so that the count rates are similar and dead time is independent. The method does not apply to different energy channels of the same instrument, or to the signal observed by two instruments with very different responses. See the paper for caveats.

Parameters:
lc1: :class:`stingray.lightcurve.Lightcurve`

Light curve from channel 1

lc2: :class:`stingray.lightcurve.Lightcurve`

Light curve from channel 2. Must be strictly simultaneous to lc1 and have the same binning time. Also, it must be strictly independent, e.g. from a different detector. There must be no dead time cross-talk between the two light curves.

segment_size: float

The final Fourier products are averaged over many segments of the input light curves. This is the length of each segment being averaged. Note that the light curve must be long enough to have at least 30 segments, as the result gets better as one averages more and more segments.

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``frac``

The normalization of the (real part of the) cross spectrum.

Returns:
resultsastropy.table.Table object or dict or str

The content of results depends on whether return_objects is True or False. If return_objects==False, results is a Table with the following columns:

  • pds1: the corrected PDS of lc1

  • pds2: the corrected PDS of lc2

  • cs: the corrected cospectrum

  • ptot: the corrected PDS of lc1 + lc2

If return_objects is True, results is a dict, with keys named like the columns listed above but with AveragePowerspectrum or AverageCrossspectrum objects instead of arrays.

Other Parameters:
plotbool, default False

Plot diagnostics: check if the smoothed Fourier difference scatter is a good approximation of the data scatter.

axmatplotlib.axes.Axes object

If not None and plot is True, use this axis object to produce the diagnostic plot. Otherwise, create a new figure.

smoothing_alg{‘gauss’, …}

Smoothing algorithm. For now, the only smoothing algorithm allowed is gauss, which applies a Gaussian Filter from scipy.

smoothing_lengthint, default segment_size * 3

Number of bins to smooth in gaussian window smoothing

verbose: bool, default False

Print out information on the outcome of the algorithm (recommended)

tolerancefloat, default 0.05

Accepted relative error on the FAD-corrected Fourier amplitude, to be used as a success diagnostic. The check performed is:

stdtheor = 2 / np.sqrt(n)
std = (average_corrected_fourier_diff / n).std()
np.abs((std - stdtheor) / stdtheor) < tolerance

strictbool, default False

Decide what to do if the condition on tolerance is not met. If True, raise a RuntimeError. If False, just throw a warning.

output_filestr, default None

Name of an output file (any extension automatically recognized by Astropy is fine)
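
As a minimal usage sketch (not from the original docs; the light curves below are hypothetical Poisson noise standing in for two independent detectors, and the rates and segment size are illustrative):

>>> import numpy as np
>>> from stingray import Lightcurve
>>> from stingray.deadtime.fad import calculate_FAD_correction
>>> dt = 1 / 256
>>> time = np.arange(0, 512, dt)
>>> lc1 = Lightcurve(time, np.random.poisson(100 * dt, time.size), dt=dt)
>>> lc2 = Lightcurve(time, np.random.poisson(100 * dt, time.size), dt=dt)
>>> results = calculate_FAD_correction(lc1, lc2, segment_size=16,
...                                    norm="leahy")  # 32 averaged segments
>>> ptot = results["ptot"]  # corrected PDS of lc1 + lc2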

stingray.deadtime.fad.get_periodograms_from_FAD_results(FAD_results, kind='ptot')[source]

Get Stingray periodograms from FAD results.

Parameters:
FAD_resultsastropy.table.Table object or str

Results from calculate_FAD_correction, either as a Table or an output file name

kindstr, one of [‘ptot’, ‘pds1’, ‘pds2’, ‘cs’]

Kind of periodogram to get (e.g., ‘ptot’ -> PDS from the sum of the two light curves, ‘cs’ -> cospectrum, etc.)

Returns:
resultsAveragedCrossspectrum or AveragedPowerspectrum object

The periodogram.

stingray.deadtime.model.check_A(rate, td, tb, max_k=100, save_to=None, linthresh=1e-06, rate_is_incident=True)[source]

Test that A is well-behaved.

This function produces a plot of \(A_k - r_0^2 t_b^2\) vs \(k\), to visually check that \(A_k \rightarrow r_0^2 t_b^2\) for \(k\rightarrow\infty\), as per Eq. 43 in Zhang+95.

With this function it is possible to determine how many inner loops k (limit_k in the function pds_model_zhang) are necessary for a correct approximation of the dead time model.

Parameters:
ratefloat

Count rate, either incident or detected (use the rate_is_incident bool to specify)

tdfloat

Dead time

tbfloat

Bin time of the light curve

Other Parameters:
max_kint

Maximum k to plot

save_tostr, default None

If not None, save the plot to this file

linthreshfloat, default 0.000001

Linear threshold for the “symlog” scale of the plot

rate_is_incidentbool, default True

If True, the input rate is the incident count rate. If False, it is the detected one.

stingray.deadtime.model.check_B(rate, td, tb, max_k=100, save_to=None, linthresh=1e-06, rate_is_incident=True)[source]

Check that \(B\rightarrow 0\) for \(k\rightarrow \infty\).

This function produces a plot of \(B_k\) vs \(k\), to visually check that \(B_k \rightarrow 0\) for \(k\rightarrow\infty\), as per Eq. 43 in Zhang+95.

With this function it is possible to determine how many inner loops k (limit_k in the function pds_model_zhang) are necessary for a correct approximation of the dead time model.

Parameters:
ratefloat

Count rate, either incident or detected (use the rate_is_incident bool to specify)

tdfloat

Dead time

tbfloat

Bin time of the light curve

Other Parameters:
max_kint

Maximum k to plot

save_tostr, default None

If not None, save the plot to this file

linthreshfloat, default 0.000001

Linear threshold for the “symlog” scale of the plot

rate_is_incidentbool, default True

If True, the input rate is the incident count rate. If False, it is the detected one.
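
For example, to check visually that limit_k = 100 would be adequate for a given configuration (the rate, dead time, and bin time below are illustrative; both functions produce diagnostic plots):

>>> from stingray.deadtime.model import check_A, check_B
>>> rate, td, tb = 300.0, 2.5e-3, 1e-3  # counts/s, dead time (s), bin time (s)
>>> check_A(rate, td, tb, max_k=100)
>>> check_B(rate, td, tb, max_k=100)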

stingray.deadtime.model.non_paralyzable_dead_time_model(freqs, dead_time, rate, bin_time=None, limit_k=200, background_rate=0.0, n_approx=None)[source]

Calculate the dead-time-modified power spectrum.

Parameters:
freqsarray of floats

Frequency array

dead_timefloat

Dead time

ratefloat

Detected source count rate

Returns:
powerarray of floats

Power spectrum

Other Parameters:
bin_timefloat

Bin time of the light curve

limit_kint, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

background_ratefloat, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

n_approxint, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.
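
A short sketch of evaluating the model over a frequency grid (all values illustrative):

>>> import numpy as np
>>> from stingray.deadtime.model import non_paralyzable_dead_time_model
>>> freqs = np.linspace(0.001, 500, 1000)
>>> power = non_paralyzable_dead_time_model(freqs, dead_time=2.5e-3,
...                                         rate=150.0, bin_time=1e-3)
>>> power.size
1000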

stingray.deadtime.model.pds_model_zhang(N, rate, td, tb, limit_k=60, rate_is_incident=True)[source]

Calculate the dead-time-modified power spectrum.

Parameters:
Nint

The number of spectral bins

ratefloat

Incident count rate

tdfloat

Dead time

tbfloat

Bin time of the light curve

Returns:
freqsarray of floats

Frequency array

powerarray of floats

Power spectrum

Other Parameters:
limit_kint

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

rate_is_incidentbool, default True

If True, the input rate is the incident count rate. If False, it is the detected count rate.

stingray.deadtime.model.r_det(td, r_i)[source]

Calculate detected countrate given dead time and incident countrate.

Parameters:
tdfloat

Dead time

r_ifloat

Incident countrate

stingray.deadtime.model.r_in(td, r_0)[source]

Calculate incident countrate given dead time and detected countrate.

Parameters:
tdfloat

Dead time

r_0float

Detected countrate
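
These two functions are inverses of each other, following the standard non-paralyzable dead time relation \(r_{det} = r_{in} / (1 + r_{in} \tau_d)\). A quick numerical check with illustrative values:

>>> import numpy as np
>>> from stingray.deadtime.model import r_det, r_in
>>> td, r_i = 2.5e-3, 300.0
>>> r_0 = r_det(td, r_i)  # 300 / (1 + 300 * 0.0025) ~ 171.4 counts/s
>>> assert np.isclose(r_in(td, r_0), r_i)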


Higher-Order Fourier and Spectral Timing Products

These classes implement higher-order Fourier analysis products (e.g. Bispectrum) and Spectral Timing related methods taking advantage of both temporal and spectral information in modern data sets.

Bispectrum

class stingray.bispectrum.Bispectrum(lc, maxlag=None, window=None, scale='biased')[source]

Makes a Bispectrum object from a stingray.Lightcurve.

The bispectrum is a higher-order time series analysis method. It is calculated here with the indirect method, as the Fourier transform of the triple auto-correlation function, also known as the 3rd order cumulant.

Parameters:
lcstingray.Lightcurve object

The light curve data for bispectrum calculation.

maxlagint, optional, default None

Maximum lag on both positive and negative sides of 3rd order cumulant (similar to lags in correlation). If None, the maximum lag is set to one-half of the length of the light curve.

window{uniform, parzen, hamming, hanning, triangular, welch, blackman, flat-top}, optional, default ‘uniform’

Type of window function to apply to the data.

scale{biased, unbiased}, optional, default biased

Flag to decide biased or unbiased normalization for 3rd order cumulant function.

References

1) Maccarone, Thomas J., “The biphase explained: understanding the asymmetries in coupled Fourier components of astronomical time series”, Texas Tech University / University of Southampton.

2) T. S. Rao, M. M. Gabr, An Introduction to Bispectral Analysis and Bilinear Time Series Models, Lecture Notes in Statistics, Volume 24, D. Brillinger, S. Fienberg, J. Gani, J. Hartigan, K. Krickeberg, Editors, Springer-Verlag, New York, NY, 1984.

3) Matlab version of bispectrum under following link. https://www.mathworks.com/matlabcentral/fileexchange/60-bisp3cum

Examples

>>> from stingray.lightcurve import Lightcurve
>>> from stingray.bispectrum import Bispectrum
>>> lc = Lightcurve([1, 2, 3, 4, 5], [2, 3, 1, 1, 2])
>>> bs = Bispectrum(lc, maxlag=1)
>>> bs.lags
array([-1.,  0.,  1.])
>>> bs.freq
array([-0.5,  0. ,  0.5])
>>> bs.cum3
array([[-0.2976,  0.1024,  0.1408],
       [ 0.1024,  0.144 , -0.2976],
       [ 0.1408, -0.2976,  0.1024]])
>>> bs.bispec_mag
array([[ 1.26336794,  0.0032    ,  0.0032    ],
       [ 0.0032    ,  0.16      ,  0.0032    ],
       [ 0.0032    ,  0.0032    ,  1.26336794]])
>>> bs.bispec_phase
array([[ -9.65946229e-01,   2.25347190e-14,   3.46944695e-14],
       [  0.00000000e+00,   3.14159265e+00,   0.00000000e+00],
       [ -3.46944695e-14,  -2.25347190e-14,   9.65946229e-01]])
Attributes:
lcstingray.Lightcurve object

The light curve data to compute the Bispectrum.

fsfloat

Sampling frequency of the light curve

nint

Total number of samples of the light curve.

maxlagint

Maximum lag on both positive and negative sides of 3rd order cumulant (similar to lags in correlation)

signalnumpy.ndarray

Row vector of light curve counts for matrix operations

scale{biased, unbiased}

Flag to decide biased or unbiased normalization for 3rd order cumulant function.

lagsnumpy.ndarray

An array of time lags for which 3rd order cumulant is calculated

freqnumpy.ndarray

An array of freq values for Bispectrum.

cum3numpy.ndarray

A maxlag*2+1 x maxlag*2+1 matrix containing 3rd order cumulant data for different lags.

bispecnumpy.ndarray

A maxlag*2+1 x maxlag*2+1 matrix containing bispectrum data for different frequencies.

bispec_magnumpy.ndarray

Magnitude of the bispectrum

bispec_phasenumpy.ndarray

Phase of the bispectrum

plot_cum3(axis=None, save=False, filename=None)[source]

Plot the 3rd order cumulant as a function of time lags using matplotlib: the cum3 attribute is plotted on the z-axis, with the lags attribute on the x- and y-axes.

Parameters:
axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() method.

savebool, optional, default False

If True, save the figure with specified filename.

filenamestr

File name and path of the image to save. Depends on the boolean save.

Returns:
pltmatplotlib.pyplot object

Reference to plot, call show() to display it

plot_mag(axis=None, save=False, filename=None)[source]

Plot the magnitude of the bispectrum as a function of freq using matplotlib: the bispec_mag attribute is plotted on the z-axis, with the freq attribute on the x- and y-axes.

Parameters:
axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() method.

savebool, optional, default False

If True, save the figure with specified filename and path.

filenamestr

File name and path of the image to save. Depends on the bool save.

Returns:
pltmatplotlib.pyplot object

Reference to plot, call show() to display it

plot_phase(axis=None, save=False, filename=None)[source]

Plot the phase of the bispectrum as a function of freq using matplotlib: the bispec_phase attribute is plotted on the z-axis, with the freq attribute on the x- and y-axes.

Parameters:
axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() function.

savebool, optional, default False

If True, save the figure with specified filename and path.

filenamestr

File name and path of the image to save. Depends on the bool save.

Returns:
pltmatplotlib.pyplot object

Reference to plot, call show() to display it
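
Putting the pieces together, a sketch of computing and plotting a bispectrum from a hypothetical light curve:

>>> import numpy as np
>>> from stingray.lightcurve import Lightcurve
>>> from stingray.bispectrum import Bispectrum
>>> time = np.arange(0, 100, 0.1)
>>> lc = Lightcurve(time, np.random.poisson(10, time.size), dt=0.1)
>>> bs = Bispectrum(lc, maxlag=25, window="hamming", scale="unbiased")
>>> p = bs.plot_mag()  # reference to matplotlib.pyplot; call p.show() to display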


Covariancespectrum

class stingray.Covariancespectrum(data, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]

Compute a covariance spectrum for the data. The input can be either event data or pre-made light curves. Event data can be given either as a numpy.ndarray of (time stamp, energy) pairs or as a stingray.events.EventList object. If light curves are formed ahead of time, a list of stingray.Lightcurve objects should be passed, ideally one light curve for each band of interest.

For the case where the data is input as a list of stingray.Lightcurve objects, the reference band(s) should either be

  1. a single stingray.Lightcurve object,

  2. a list of stingray.Lightcurve objects with the reference band for each band of interest pre-made, or

  3. None, in which case reference bands will be formed by combining all light curves except for the band of interest.

In the case of event data, band_interest and ref_band_interest can be (multiple) pairs of energies, and the light curves for the bands of interest and reference bands will be produced dynamically.

Parameters:
data{numpy.ndarray | stingray.events.EventList object | list of stingray.Lightcurve objects}

data contains the time series data, either in the form of a 2-D array of (time stamp, energy) pairs for event data, or as a list of light curves. Note: the event list must be in sorted order with respect to the times of arrival.

dtfloat

The time resolution of the stingray.Lightcurve formed from the energy bin. Only used if data is an event list.

band_interest{None, iterable of tuples}

If None, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list of stingray.Lightcurve objects, then the user may supply their energy values here, for construction of a reference band.

ref_band_interest{None, tuple, stingray.Lightcurve, list of stingray.Lightcurve objects}

Defines the reference band to be used for comparison with the bands of interest. If None, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a single stingray.Lightcurve object to be used (the same for each band of interest) or a list of stingray.Lightcurve objects, one for each band of interest.

stdfloat or np.array or list of numbers

The term std is used to calculate the excess variance of a band. If std is None, the default Poisson case is assumed and the standard deviation is calculated as mean(lc)**0.5. If a single float is given, it is used as the standard deviation for all bands. If an iterable of numbers is given, its mean is used for the same purpose.

References

[1] Wilkinson, T. and Uttley, P. (2009), Accretion disc variability in the hard state of black hole X-ray binaries. Monthly Notices of the Royal Astronomical Society, 397: 666–676. doi: 10.1111/j.1365-2966.2009.15008.x

Examples

See the notebooks repository for detailed notebooks on the code.

Attributes:
unnorm_covarnp.ndarray

An array of arrays with the mid-point of each band_interest and its covariance. It is the array form of the dictionary energy_covar. The covariance values are unnormalized.

covarnp.ndarray

Normalized covariance spectrum.

covar_errornp.ndarray

Errors of the normalized covariance spectrum.
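
A usage sketch with event data, given as hypothetical (time stamp, energy) pairs and two energy bands of interest:

>>> import numpy as np
>>> from stingray import Covariancespectrum
>>> times = np.sort(np.random.uniform(0, 1000, 10000))
>>> energies = np.random.uniform(0.3, 10.0, times.size)
>>> data = np.column_stack([times, energies])  # (time stamp, energy) pairs
>>> cov = Covariancespectrum(data, dt=1.0,
...                          band_interest=[(0.3, 2.0), (2.0, 10.0)])
>>> covar, err = cov.covar, cov.covar_error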


AveragedCovariancespectrum

class stingray.AveragedCovariancespectrum(data, segment_size, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]

Compute a covariance spectrum for the data, defined in Wilkinson & Uttley (2009), Equation 15 (see References).

Parameters:
data{numpy.ndarray | list of stingray.Lightcurve objects}

data contains the time series data, either in the form of a 2-D array of (time stamp, energy) pairs for event data, or as a list of stingray.Lightcurve objects. Note: the event list must be in sorted order with respect to the times of arrival.

segment_sizefloat

The length of each segment in the averaged covariance spectrum. The number of segments will be calculated automatically using the total length of the data set and the segment_size defined here.

dtfloat

The time resolution of the stingray.Lightcurve formed from the energy bin. Only used if data is an event list.

band_interest{None, iterable of tuples}

If None, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list of stingray.Lightcurve objects, then the user may supply their energy values here, for construction of a reference band.

ref_band_interest{None, tuple, stingray.Lightcurve, list of stingray.Lightcurve objects}

Defines the reference band to be used for comparison with the bands of interest. If None, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a single stingray.Lightcurve object to be used (the same for each band of interest) or a list of stingray.Lightcurve objects, one for each band of interest.

stdfloat or np.array or list of numbers

The term std is used to calculate the excess variance of a band. If std is None, the default Poisson case is assumed and the standard deviation is calculated as mean(lc)**0.5. If a single float is given, it is used as the standard deviation for all bands. If an iterable of numbers is given, its mean is used for the same purpose.

References

[1] Wilkinson, T. and Uttley, P. (2009), Accretion disc variability in the hard state of black hole X-ray binaries. Monthly Notices of the Royal Astronomical Society, 397: 666–676. doi: 10.1111/j.1365-2966.2009.15008.x

Attributes:
unnorm_covarnp.ndarray

An array of arrays with the mid-point of each band_interest and its covariance. It is the array form of the dictionary energy_covar. The covariance values are unnormalized.

covarnp.ndarray

Normalized covariance spectrum.

covar_errornp.ndarray

Errors of the normalized covariance spectrum.


VarEnergySpectrum

Abstract base class for spectral timing products including both variability and spectral information.

class stingray.varenergyspectrum.VarEnergySpectrum(events, freq_interval, energy_spec, ref_band=None, bin_time=1, use_pi=False, segment_size=None, events2=None, return_complex=False)[source]
property energy

Give the centers of the energy intervals.

from_astropy_table(*args, **kwargs)[source]

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the table metadata will form the new meta attributes of the object.

from_pandas(*args, **kwargs)[source]

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the dataframe attrs dictionary will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

from_xarray(*args, **kwargs)[source]

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

main_array_attr = 'energy'

Base class for variability-energy spectrum.

This class is only a base for the various variability spectra, and it’s not to be instantiated by itself.

Parameters:
eventsstingray.events.EventList object

event list

freq_interval[f0, f1], floats

the frequency range over which to calculate the variability quantity

energy_speclist or tuple (emin, emax, N, type)

if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, it encodes the minimum and maximum energies, the number of intervals, and the binning type (lin or log).

Other Parameters:
ref_band[emin, emax], floats; default None

minimum and maximum energy of the reference band. If None, the full band is used.

use_pibool, default False

Use channel instead of energy

events2stingray.events.EventList object

event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.

return_complex: bool, default False

In spectra that produce complex values, return the whole spectrum. Otherwise, the absolute value will be returned.

Attributes:
events1array-like

list of events used to produce the spectrum

events2array-like

if the spectrum requires it, second list of events

freq_intervalarray-like

interval of frequencies used to calculate the spectrum

energy_intervals[[e00, e01], [e10, e11], ...]

energy intervals used for the spectrum

spectrumarray-like

the spectral values, corresponding to each energy interval

spectrum_errorarray-like

the error bars corresponding to spectrum

energyarray-like

The centers of energy intervals
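
As a sketch, a concrete subclass such as RmsEnergySpectrum (see below) is instantiated like this, where events is assumed to be a stingray.events.EventList with energy information and all numbers are illustrative:

>>> from stingray.varenergyspectrum import RmsEnergySpectrum
>>> rms = RmsEnergySpectrum(events, freq_interval=[0.1, 10.0],
...                         energy_spec=(0.3, 10.0, 10, "log"),
...                         segment_size=100, bin_time=0.01)
>>> centers, values = rms.energy, rms.spectrum  # one rms value per interval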


RmsEnergySpectrum

stingray.varenergyspectrum.RmsEnergySpectrum

alias of RmsSpectrum


LagEnergySpectrum

stingray.varenergyspectrum.LagEnergySpectrum

alias of LagSpectrum


ExcessVarianceSpectrum

class stingray.varenergyspectrum.ExcessVarianceSpectrum(events, freq_interval, energy_spec, bin_time=1, use_pi=False, segment_size=None, normalization='fvar')[source]

Calculate the Excess Variance spectrum.

For each energy interval, calculate the excess variance in the specified frequency range.

Parameters:
eventsstingray.events.EventList object

event list

freq_interval[f0, f1], list of float

the frequency range over which to calculate the variability quantity

energy_speclist or tuple (emin, emax, N, type)

if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, it encodes the minimum and maximum energies, the number of intervals, and the binning type (lin or log).

Other Parameters:
ref_band[emin, emax], floats; default None

minimum and maximum energy of the reference band. If None, the full band is used.

use_pibool, default False

Use channel instead of energy

Attributes:
events1array-like

list of events used to produce the spectrum

freq_intervalarray-like

interval of frequencies used to calculate the spectrum

energy_intervals[[e00, e01], [e10, e11], ...]

energy intervals used for the spectrum

spectrumarray-like

the spectral values, corresponding to each energy interval

spectrum_errorarray-like

the errorbars corresponding to spectrum
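
A minimal sketch, again assuming events is a stingray.events.EventList with energy information:

>>> from stingray.varenergyspectrum import ExcessVarianceSpectrum
>>> exvar = ExcessVarianceSpectrum(events, freq_interval=[0.1, 10.0],
...                                energy_spec=(0.3, 10.0, 8, "lin"),
...                                segment_size=100, normalization="fvar")
>>> fvar, fvar_err = exvar.spectrum, exvar.spectrum_error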

add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
maskarray of bool

The mask. Has to be of the same length as self.time

Returns:
ts_newStingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplacebool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrslist of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize, such as other StingrayObject instances or arrays of objects.

This function does not distinguish between array-like and scalar data.

Returns:
data_attributeslist of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

dict() dict

Return a dictionary representation of the object.

property energy

Give the centers of the energy intervals.

from_astropy_table(*args, **kwargs)

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the table metadata will form the new meta attributes of the object.

from_pandas(*args, **kwargs)

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the dataframe attrs dictionary will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

from_xarray(*args, **kwargs)

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Give a dictionary with all non-None meta attrs of the object.

internal_array_attrs() list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

main_array_attr = 'energy'

Base class for variability-energy spectrum.

This class is only a base for the various variability spectra, and it’s not to be instantiated by itself.

Parameters:
eventsstingray.events.EventList object

event list

freq_interval[f0, f1], floats

the frequency range over which to calculate the variability quantity

energy_speclist or tuple (emin, emax, N, type)

if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, it encodes the minimum and maximum energies, the number of intervals, and the binning type (lin or log).

Other Parameters:
ref_band[emin, emax], floats; default None

minimum and maximum energy of the reference band. If None, the full band is used.

use_pibool, default False

Use channel instead of energy

events2stingray.events.EventList object

event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.

return_complex: bool, default False

In spectra that produce complex values, return the whole spectrum. Otherwise, the absolute value will be returned.

Attributes:
events1array-like

list of events used to produce the spectrum

events2array-like

if the spectrum requires it, second list of events

freq_intervalarray-like

interval of frequencies used to calculate the spectrum

energy_intervals[[e00, e01], [e10, e11], ...]

energy intervals used for the spectrum

spectrumarray-like

the spectral values, corresponding to each energy interval

spectrum_errorarray-like

the error bars corresponding to spectrum

energyarray-like

The centers of energy intervals

meta_attrs() list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributeslist of str

List of meta attributes.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_applyfunction

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_applylist of str

Attributes to be modified by func_to_apply.

attrs_to_discardlist of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso

Generic reader for StingrayObject

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

Note

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

to_astropy_table(no_longdouble=False) Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdoublebool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_pandas() DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

write(filename: str, fmt: str | None = None) None

Generic writer for StingrayObject

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Note

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


Utilities

Commonly used utility functionality, including Good Time Interval operations and input/output helper methods.

Statistical Functions

stingray.stats.a_from_pf(p)[source]

Fractional amplitude of modulation from pulsed fraction

If the pulsed profile is defined as p = mean * (1 + a * sin(phase)),

we define “pulsed fraction” as 2a/b, where b = mean + a is the maximum and a is the amplitude of the modulation.

Hence, a = pf / (2 - pf)

Examples

>>> a_from_pf(1)
1.0
>>> a_from_pf(0)
0.0
stingray.stats.a_from_ssig(ssig, ncounts)[source]

Amplitude of a sinusoid corresponding to a given Z/PDS value

From Leahy et al. 1983, given a pulse profile p = lambda * (1 + a * sin(phase)), the theoretical value of Z^2_n is Ncounts / 2 * a^2.

Note that if there are multiple sinusoidal components, one can use a = sqrt(sum(a_l**2)) (Bachetti+2021b)

Examples

>>> assert np.isclose(a_from_ssig(150, 30000), 0.1)
stingray.stats.amplitude_upper_limit(pmeas, counts, n=1, c=0.95, fft_corr=False, nyq_ratio=0)[source]

Upper limit on a sinusoidal modulation, given a measured power in the PDS/Z search.

Based on Eq. 10 in Vaughan+94 and a_from_ssig: they are equivalent, but Vaughan+94 corrects further for the response inside an FFT bin and at frequencies close to Nyquist. These two corrections are enabled by setting fft_corr=True and by setting nyq_ratio to the correct \(f / f_{Nyq}\) of the FFT peak.

To understand the meaning of this amplitude: if the modulation is described by

\[p = \overline{p} (1 + a \sin x),\]

this function returns a.

If it is a sum of sinusoidal harmonics instead,

\[p = \overline{p} \left(1 + \sum_l a_l \sin(l x)\right),\]

a is equivalent to \(\sqrt{\sum_l a_l^2}\).

See power_upper_limit

Parameters:
pmeas: float

The measured value of power

counts: int

The number of counts in the light curve used to calculate the spectrum

Returns:
a: float

The modulation amplitude that could produce P>pmeas with 1 - c probability

Other Parameters:
n: int

The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence value for the probability (e.g. 0.95 = 95%)

fft_corr: bool

Apply a correction for the expected power concentrated in an FFT bin, which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi at the bin edge).

nyq_ratio: float

Ratio of the frequency of this feature with respect to the Nyquist frequency. Important to know when dealing with FFTs, because the FFT response decays between 0 and f_Nyq similarly to the response inside a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq

Examples

>>> aup = amplitude_upper_limit(40, 30000, 1, 0.99)
>>> aup_nyq = amplitude_upper_limit(40, 30000, 1, 0.99, nyq_ratio=1)
>>> assert np.isclose(aup_nyq, aup / (2 / np.pi))
>>> aup_corr = amplitude_upper_limit(40, 30000, 1, 0.99, fft_corr=True)
>>> assert np.isclose(aup_corr, aup / np.sqrt(0.773))
stingray.stats.classical_pvalue(power, nspec)[source]

Note: This is stingray’s original implementation of the probability distribution for the power spectrum. It is superseded by the implementation in pds_probability for practical purposes, but remains here for backwards compatibility and for its educational value as a clear, explicit implementation of the correct probability distribution.

Compute the probability of detecting the current power under the assumption that there is no periodic oscillation in the data.

This computes the single-trial p-value that the power was observed under the null hypothesis that there is no signal in the data.

Important: the underlying assumptions that make this calculation valid are:

  1. the powers in the power spectrum follow a chi-square distribution

  2. the power spectrum is normalized according to [Leahy 1983]_, such that the powers have a mean of 2 and a variance of 4

  3. there is only white noise in the light curve. That is, there is no aperiodic variability that would change the overall shape of the power spectrum.

Also note that the p-value is for a single trial, i.e. the power currently being tested. If more than one power or more than one power spectrum are being tested, the resulting p-value must be corrected for the number of trials (Bonferroni correction).

Mathematical formulation in [Groth 1975]_. Original implementation in IDL by Anna L. Watts.

Parameters:
powerfloat

The squared Fourier amplitude of a spectrum to be evaluated

nspecint

The number of spectra or frequency bins averaged in power. This matters because averaging spectra or frequency bins increases the signal-to-noise ratio, i.e. makes the statistical distributions of the noise narrower, such that a smaller power might be very significant in averaged spectra even though it would not be in a single power spectrum.

Returns:
pvalfloat

The classical p-value of the observed power being consistent with the null hypothesis of white noise

References

[Leahy 1983] Leahy, D. A., et al. 1983, ApJ, 266, 160

[Groth 1975] Groth, E. J. 1975, ApJS, 29, 285

stingray.stats.equivalent_gaussian_Nsigma(p)[source]

Number of Gaussian sigmas corresponding to tail probability.

This function converts the provided p-value, interpreted as the tail probability of a standard Gaussian distribution, into units of standard deviations away from the Gaussian mean. This allows the user to make a statement about the signal such as “I detected this pulsation at 4.1 sigma”.

The example values below are obtained by brute-force integrating the Gaussian probability density function using the mpmath library between Nsigma and +inf.

Examples

>>> assert np.isclose(equivalent_gaussian_Nsigma(0.15865525393145707), 1,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(0.0013498980316301035), 3,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(9.865877e-10), 6,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(6.22096e-16), 8,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(3.0567e-138), 25, atol=0.1)
stingray.stats.fold_detection_level(nbin, epsilon=0.01, ntrial=1)[source]

Return the detection level for a folded profile.

See Leahy et al. (1983).

Parameters:
nbinint

The number of bins in the profile

epsilonfloat, default 0.01

The fractional probability that the signal has been produced by noise

Returns:
detlevfloat

The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

stingray.stats.fold_profile_logprobability(stat, nbin, ntrial=1)[source]

Calculate the log-probability of a certain folded profile, due to noise.

Parameters:
statfloat

The epoch folding statistics

nbinint

The number of bins in the profile

Returns:
logpfloat

The log-probability that the profile has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

stingray.stats.fold_profile_probability(stat, nbin, ntrial=1)[source]

Calculate the probability of a certain folded profile, due to noise.

Parameters:
statfloat

The epoch folding statistics

nbinint

The number of bins in the profile

Returns:
pfloat

The probability that the profile has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile
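
fold_detection_level and fold_profile_probability are inverses of each other by construction, as this quick check illustrates (nbin illustrative):

>>> import numpy as np
>>> from stingray.stats import fold_detection_level, fold_profile_probability
>>> detlev = fold_detection_level(nbin=16, epsilon=0.01)
>>> assert np.isclose(fold_profile_probability(detlev, nbin=16), 0.01)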

stingray.stats.p_multitrial_from_single_trial(p1, n)[source]

Calculate a multi-trial p-value from a single-trial one.

Calling p the probability of a single success, the binomial distribution says that the probability of at least one success in n trials is

\[P(k\geq 1) = \sum_{k\geq 1} \binom{n}{k} p^k (1-p)^{(n-k)}\]

or more simply, using P(k ≥ 0) = 1

\[P(k\geq 1) = 1 - \binom{n}{0} (1-p)^n = 1 - (1-p)^n\]
Parameters:
p1float

The significance at which we reject the null hypothesis on each single trial.

nint

The number of trials

Returns:
pnfloat

The significance at which we reject the null hypothesis after multiple trials

stingray.stats.p_single_trial_from_p_multitrial(pn, n)[source]

Calculate the single-trial p-value from a total p-value

Let us say that we want to reject a null hypothesis at the pn level, after executing n different measurements. This might be the case because, e.g., we want to have a 1% probability of detecting a signal in an entire power spectrum, and we need to correct the detection level accordingly.

The typical procedure is dividing the initial probability (often called epsilon) by the number of trials. This is called the Bonferroni correction, and it is often a good approximation when pn is low: p1 = pn / n.

However, if pn is close to 1, this approximation gives incorrect results.

Here we calculate this probability by inverting the binomial problem. Given that (see p_multitrial_from_single_trial) the probability of getting at least one hit in n trials, given the single-trial probability p, is

\[P (k \geq 1) = 1 - (1 - p)^n,\]

we obtain the single-trial probability from the multi-trial one as

\[p = 1 - (1 - P)^{(1/n)}\]

This is also known as Šidák correction.

Parameters:
pnfloat

The significance at which we want to reject the null hypothesis after multiple trials

nint

The number of trials

Returns:
p1float

The significance at which we reject the null hypothesis on each single trial.
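
The two conversions round-trip exactly, and comparing with the Bonferroni approximation shows where the latter breaks down (values illustrative):

>>> import numpy as np
>>> from stingray.stats import (p_multitrial_from_single_trial,
...                             p_single_trial_from_p_multitrial)
>>> pn = p_multitrial_from_single_trial(0.001, 100)  # 1 - 0.999**100 ~ 0.095
>>> assert np.isclose(p_single_trial_from_p_multitrial(pn, 100), 0.001)
>>> p1 = p_single_trial_from_p_multitrial(0.9, 10)   # ~ 0.206
>>> assert not np.isclose(p1, 0.9 / 10, atol=0.05)   # Bonferroni would give 0.09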

stingray.stats.pds_detection_level(epsilon=0.01, ntrial=1, n_summed_spectra=1, n_rebin=1)[source]

Detection level for a PDS.

Return the detection level (with probability 1 - epsilon) for a Power Density Spectrum of nbins bins, normalized a la Leahy (1983), based on the 2-dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin) and multiple PDS averaging (n_summed_spectra)

Parameters:
epsilonfloat

The single-trial probability value(s)

Other Parameters:
ntrialint

The number of independent trials (the independent bins of the PDS)

n_summed_spectraint

The number of power density spectra that have been averaged to obtain this power level

n_rebinint

The number of power density bins that have been averaged to obtain this power level

Examples

>>> assert np.isclose(pds_detection_level(0.1), 4.6, atol=0.1)
>>> assert np.allclose(pds_detection_level(0.1, n_rebin=[1]), [4.6], atol=0.1)
stingray.stats.pds_probability(level, ntrial=1, n_summed_spectra=1, n_rebin=1)[source]

Give the probability of a given power level in PDS.

Return the probability of a certain power level in a Power Density Spectrum of nbins bins, normalized a la Leahy (1983), based on the 2-dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin) and multiple PDS averaging (n_summed_spectra)

Parameters:
levelfloat or array of floats

The power level for which we are calculating the probability

Returns:
epsilonfloat

The probability value(s)

Other Parameters:
ntrialint

The number of independent trials (the independent bins of the PDS)

n_summed_spectraint

The number of power density spectra that have been averaged to obtain this power level

n_rebinint

The number of power density bins that have been averaged to obtain this power level
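
pds_probability is the inverse of pds_detection_level; a quick consistency check:

>>> import numpy as np
>>> from stingray.stats import pds_detection_level, pds_probability
>>> level = pds_detection_level(epsilon=0.01, n_summed_spectra=16)
>>> assert np.isclose(pds_probability(level, n_summed_spectra=16), 0.01)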

stingray.stats.pf_from_a(a)[source]

Pulsed fraction from fractional amplitude of modulation.

If the pulsed profile is defined as p = mean * (1 + a * sin(phase)),

we define “pulsed fraction” as 2a/b, where b = mean + a is the maximum and a is the amplitude of the modulation.

Hence, pulsed fraction = 2a/(1+a)

Examples

>>> pf_from_a(1)
1.0
>>> pf_from_a(0)
0.0
stingray.stats.pf_from_ssig(ssig, ncounts)[source]

Estimate pulsed fraction for a sinusoid from a given Z or PDS power.

See a_from_ssig and pf_from_a for more details

Examples

>>> assert np.isclose(round(a_from_pf(pf_from_ssig(150, 30000)), 1), 0.1)
stingray.stats.pf_upper_limit(*args, **kwargs)[source]

Upper limit on pulsed fraction, given a measured power in the PDS/Z search.

See power_upper_limit and pf_from_ssig. All arguments are the same as amplitude_upper_limit

Parameters:
pmeas: float

The measured value of power

counts: int

The number of counts in the light curve used to calculate the spectrum

Returns:
pf: float

The pulsed fraction that could produce P>pmeas with 1 - c probability

Other Parameters:
n: int

The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence value for the probability (e.g. 0.95 = 95%)

fft_corr: bool

Apply a correction for the expected power concentrated in an FFT bin, which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi at the bin edge).

nyq_ratio: float

Ratio of the frequency of this feature with respect to the Nyquist frequency. Important to know when dealing with FFTs, because the FFT response decays between 0 and f_Nyq similarly to the response inside a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq

Examples

>>> pfup = pf_upper_limit(40, 30000, 1, 0.99)
>>> assert np.isclose(pfup, 0.13, atol=0.01)
stingray.stats.phase_dispersion_detection_level(nsamples, nbin, epsilon=0.01, ntrial=1)[source]

Return the detection level for a phase dispersion minimization periodogram.

Parameters:
nsamplesint

The number of time bins in the light curve

nbinint

The number of bins in the profile

epsilonfloat, default 0.01

The fractional probability that the signal has been produced by noise

Returns:
detlevfloat

The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

stingray.stats.phase_dispersion_logprobability(stat, nsamples, nbin, ntrial=1)[source]

Calculate the log-probability of a peak in a phase dispersion minimization periodogram, due to noise.

Uses the beta-distribution from Schwarzenberg-Czerny (1997).

Parameters:
statfloat

The value of the PDM inverse peak

nsamplesint

The number of samples in the time series

nbinint

The number of bins in the profile

Returns:
logpfloat

The log-probability that the profile has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

stingray.stats.phase_dispersion_probability(stat, nsamples, nbin, ntrial=1)[source]

Calculate the probability of a peak in a phase dispersion minimization periodogram, due to noise.

Uses the beta-distribution from Schwarzenberg-Czerny (1997).

Parameters:
statfloat

The value of the PDM inverse peak

nsamplesint

The number of samples in the time series

nbinint

The number of bins in the profile

Returns:
pfloat

The probability that the profile has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

stingray.stats.power_confidence_limits(preal, n=1, c=0.95)[source]

Confidence limits on power, given a (theoretical) signal power.

This is to be used when we expect a given power (e.g. from the pulsed fraction measured in previous observations) and we want to know the range of values the measured power could take to a given confidence level. Adapted from Vaughan et al. 1994, noting that, after appropriate normalization of the spectral stats, the distribution of powers in the PDS and the Z^2_n searches is always described by a noncentral chi squared distribution.

Parameters:
preal: float

The theoretical signal-generated value of power

Returns:
pmeas: [float, float]

The upper and lower confidence interval (a, 1-a) on the measured power

Other Parameters:
n: int

The number of summed powers to obtain the result. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence level (e.g. 0.95=95%)

Examples

>>> cl = power_confidence_limits(150, c=0.84)
>>> assert np.allclose(cl, [127, 176], atol=1)
stingray.stats.power_upper_limit(pmeas, n=1, c=0.95)[source]

Upper limit on signal power, given a measured power in the PDS/Z search.

Adapted from Vaughan et al. 1994, noting that, after appropriate normalization of the spectral stats, the distribution of powers in the PDS and the Z^2_n searches is always described by a noncentral chi squared distribution.

Note that Vaughan+94 gives p(pmeas | preal), while we are interested in p(preal | pmeas), which is not described by the NCX2 stat. Rather than integrating the CDF of this probability distribution, we start from a reasonable approximation and fit to find the preal that gives pmeas as a (e.g. 95%) confidence limit.

As Vaughan+94 shows, this power is always larger than the observed one. This is because we are looking for the maximum signal power that, combined with noise powers, would give the observed power. This involves the possibility that noise powers partially cancel out some signal power.

Parameters:
pmeas: float

The measured value of power

Returns:
psig: float

The signal power that could produce P>pmeas with 1 - c probability

Other Parameters:
n: int

The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence value for the probability (e.g. 0.95 = 95%)

Examples

>>> pup = power_upper_limit(40, 1, 0.99)
>>> assert np.isclose(pup, 75, atol=2)
stingray.stats.ssig_from_a(a, ncounts)[source]

Theoretical power in the Z or PDS search for a sinusoid of amplitude a.

From Leahy et al. 1983, given a pulse profile p = lambda * (1 + a * sin(phase)), the theoretical value of Z^2_n is Ncounts / 2 * a^2.

Note that if there are multiple sinusoidal components, one can use a = sqrt(sum(a_l**2)) (Bachetti+2021b)

Examples

>>> round(ssig_from_a(0.1, 30000), 1)
150.0
stingray.stats.ssig_from_pf(pf, ncounts)[source]

Theoretical power in the Z or PDS for a sinusoid of pulsed fraction pf.

See ssig_from_a and a_from_pf for more details

Examples

>>> assert round(ssig_from_pf(pf_from_a(0.1), 30000), 1) == 150.0
stingray.stats.z2_n_detection_level(n=2, epsilon=0.01, ntrial=1, n_summed_spectra=1)[source]

Return the detection level for the Z^2_n statistics.

See Buccheri et al. (1983), Bendat and Piersol (1971).

Parameters:
nint, default 2

The n in $Z^2_n$ (number of harmonics, including the fundamental)

epsilonfloat, default 0.01

The fractional probability that the signal has been produced by noise

Returns:
detlevfloat

The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

n_summed_spectraint

Number of Z_2^n periodograms that are being averaged

stingray.stats.z2_n_logprobability(z2, n, ntrial=1, n_summed_spectra=1)[source]

Calculate the log-probability of a certain Z^2_n value, due to noise.

Parameters:
z2float

A Z^2_n statistics value

nint, default 2

The n in $Z^2_n$ (number of harmonics, including the fundamental)

Returns:
logpfloat

The log-probability that the Z^2_n value has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

n_summed_spectraint

Number of Z_2^n periodograms that were averaged to obtain z2

stingray.stats.z2_n_probability(z2, n, ntrial=1, n_summed_spectra=1)[source]

Calculate the probability of a certain Z^2_n value, due to noise.

Parameters:
z2float

A Z^2_n statistics value

nint, default 2

The n in $Z^2_n$ (number of harmonics, including the fundamental)

Returns:
pfloat

The probability that the Z^2_n value has been produced by noise

Other Parameters:
ntrialint

The number of trials executed to find this profile

n_summed_spectraint

Number of Z_2^n periodograms that were averaged to obtain z2
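
As with the other statistics above, the detection level and the probability are mutually consistent:

>>> import numpy as np
>>> from stingray.stats import z2_n_detection_level, z2_n_probability
>>> detlev = z2_n_detection_level(n=2, epsilon=0.01)
>>> assert np.isclose(z2_n_probability(detlev, n=2), 0.01)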

GTI Functionality

stingray.gti.append_gtis(gti0, gti1)[source]

Union of two non-overlapping GTIs.

If the two GTIs “touch”, this is tolerated and the touching GTIs are joined in a single one.

Parameters:
gti0: 2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

gti1: 2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

Returns:
gti: 2-d float array

The newly created GTI array.

Examples

>>> assert np.allclose(append_gtis([[0, 1]], [[2, 3]]), [[0, 1], [2, 3]])
>>> np.allclose(append_gtis([[0, 1], [4, 5]], [[2, 3]]),
...             [[0, 1], [2, 3], [4, 5]])
True
>>> assert np.allclose(append_gtis([[0, 1]], [[1, 3]]), [[0, 3]])
stingray.gti.bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1, epsilon=0.001)[source]

Compute start/stop times of equal time intervals, compatible with GTIs, and map them to the indices of an array of time stamps.

Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI). In this case, it is necessary to specify the time array containing the times of the light curve bins. Returns start and stop bins of the intervals to use for the PDS.

Parameters:
gtis2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

segment_sizefloat

Length of each time segment.

timearray-like

Array of time stamps.

Returns:
spectrum_start_binsarray-like

List of starting bins in the original time array to use in spectral calculations.

spectrum_stop_binsarray-like

List of end bins to use in the spectral calculations.

Other Parameters:
dtfloat, default median(diff(time))

Time resolution of the light curve.

epsilonfloat, default 0.001

The tolerance, in fraction of dt, for the comparisons at the borders.

fraction_stepfloat

If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half segment_size).

Examples

>>> time = np.arange(0.5, 13.5)
>>> gtis = [[0, 5], [6, 8], [9, 10]]
>>> segment_size = 2
>>> start_bins, stop_bins = bin_intervals_from_gtis(gtis,segment_size,time)
>>> assert np.allclose(start_bins, [0, 2, 6])
>>> assert np.allclose(stop_bins, [2, 4, 8])
>>> assert np.allclose(time[start_bins[0]:stop_bins[0]], [0.5, 1.5])
>>> assert np.allclose(time[start_bins[1]:stop_bins[1]], [2.5, 3.5])
stingray.gti.check_gtis(gti)[source]

Check if GTIs are well-behaved.

Check that:

  1. the shape of the GTI array is correct;

  2. no start > end

  3. no overlaps.

Parameters:
gtilist

A list of GTI (start, stop) pairs extracted from the FITS file.

Raises:
TypeError

If GTIs are of the wrong shape

ValueError

If GTIs have overlapping or displaced values
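
check_gtis returns silently on well-formed input and raises on bad input; a short sketch:

>>> from stingray.gti import check_gtis
>>> check_gtis([[0, 10], [20, 30]])  # well-behaved: no exception raised
>>> # check_gtis([[0, 10], [5, 30]]) would raise ValueError (overlap)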

stingray.gti.check_separate(gti0, gti1)[source]

Check if two GTIs do not overlap.

Parameters:
gti0: 2-d float array

List of GTIs of form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

gti1: 2-d float array

List of GTIs of form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

Returns:
separate: bool

True if GTIs are mutually exclusive, False if not.

Examples

>>> gti0 = [[0, 10]]
>>> gti1 = [[20, 30]]
>>> assert check_separate(gti0, gti1)
>>> gti0 = [[0, 10]]
>>> gti1 = [[0, 10]]
>>> check_separate(gti0, gti1)
False
>>> gti0 = [[0, 10]]
>>> gti1 = [[10, 20]]
>>> assert check_separate(gti0, gti1)
>>> gti0 = [[0, 11]]
>>> gti1 = [[10, 20]]
>>> check_separate(gti0, gti1)
False
>>> gti0 = [[0, 11]]
>>> gti1 = [[10, 20]]
>>> check_separate(gti1, gti0)
False
>>> gti0 = [[0, 10], [30, 40]]
>>> gti1 = [[11, 28]]
>>> assert check_separate(gti0, gti1)
stingray.gti.create_gti_from_condition(time, condition, safe_interval=0, dt=None)[source]

Create a GTI list from a time array and a boolean mask (condition).

Parameters:
timearray-like

Array containing time stamps.

conditionarray-like

An array of bools, of the same length as time. A possible condition can be, e.g., the result of lc > 0.

Returns:
gtis[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

The newly created GTIs.

Other Parameters:
safe_intervalfloat or [float, float]

A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs.

dtfloat

The width (in sec) of each bin of the time array. Can be irregular.
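
Examples

A minimal usage sketch (the exact GTI borders depend on dt and on how the edges are extended, so the result below is only indicative):

>>> time = np.arange(10)
>>> lc = np.asarray([0, 0, 2, 3, 1, 2, 0, 0, 4, 5])
>>> gtis = create_gti_from_condition(time, lc > 0)
>>> # gtis now covers, approximately, the two stretches where lc > 0,
>>> # i.e. around [2, 5] and [8, 9]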

stingray.gti.create_gti_mask(time, gtis, safe_interval=None, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]

Create GTI mask.

Assumes that no overlaps are present between GTIs.

Parameters:
timenumpy.ndarray

An array of time stamps

gtis[[g0_0, g0_1], [g1_0, g1_1], ...], float array-like

The list of GTIs

Returns:
maskbool array

A mask labelling all time stamps that are included in the GTIs versus those that are not.

new_gtisNx2 array

An array of new GTIs created by this function.

Other Parameters:
safe_intervalfloat or [float, float], default None

A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs. If None, no safe interval is applied to data.

min_lengthfloat

An optional minimum length for the GTIs to be applied. Only GTIs longer than min_length will be considered when creating the mask.

return_new_gtisbool

If True, return the list of new GTIs (if min_length > 0)

dtfloat

Time resolution of the data, i.e. the interval between time stamps.

epsilonfloat

Fraction of dt that is tolerated at the borders of a GTI.
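
Examples

A minimal sketch with uniformly binned time stamps at the bin centers (hedged: the treatment of bins at the GTI borders depends on dt and epsilon):

>>> time = np.arange(0.5, 10.5)            # bin centers, dt = 1
>>> mask = create_gti_mask(time, [[0, 5]])
>>> assert np.allclose(time[mask], [0.5, 1.5, 2.5, 3.5, 4.5])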

stingray.gti.create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]

Create GTI mask, allowing for non-constant dt.

Assumes that no overlaps are present between GTIs.

Parameters:
timenumpy.ndarray

An array of time stamps.

gtis[[g0_0, g0_1], [g1_0, g1_1], ...], float array-like

The list of GTIs.

Returns:
maskbool array

A mask labelling all time stamps that are included in the GTIs versus those that are not.

new_gtisNx2 array

An array of new GTIs created by this function.

Other Parameters:
safe_intervalfloat or [float, float]

A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs.

min_lengthfloat

An optional minimum length for the GTIs to be applied. Only GTIs longer than min_length will be considered when creating the mask.

return_new_gtisbool

If True, return the list of new GTIs (if min_length > 0).

dtfloat

Time resolution of the data, i.e. the interval between time stamps.

epsilonfloat

Fraction of dt that is tolerated at the borders of a GTI.

stingray.gti.create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0)[source]

Compiled and fast function to create GTI mask.

Parameters:
timenumpy.ndarray

An array of time stamps

gtisiterable of (start, stop) pairs

The list of GTIs.

masknumpy.ndarray

A pre-assigned array of zeros of the same shape as time. Records whether each time stamp is part of the GTIs.

gti_masknumpy.ndarray

A pre-assigned array of zeros of the same shape as time; records the start/stop of GTIs.

min_lengthfloat

An optional minimum length for the GTIs to be applied. Only GTIs longer than min_length will be considered when creating the mask.

stingray.gti.cross_gtis(gti_list)[source]

From multiple GTI lists, extract the common intervals EXACTLY.

Parameters:
gti_listarray-like

List of GTI arrays, each one in the usual format [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

Returns:
gti0: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] The newly created GTIs.

See also

cross_two_gtis

Extract the common intervals from two GTI lists EXACTLY

Examples

>>> gti1 = np.array([[1, 2]])
>>> gti2 = np.array([[1, 2]])
>>> newgti = cross_gtis([gti1, gti2])
>>> assert np.allclose(newgti, [[1, 2]])
>>> gti1 = np.array([[1, 4]])
>>> gti2 = np.array([[1, 2], [2, 4]])
>>> newgti = cross_gtis([gti1, gti2])
>>> assert np.allclose(newgti, [[1, 4]])
stingray.gti.cross_two_gtis(gti0, gti1)[source]

Extract the common intervals from two GTI lists EXACTLY.

Parameters:
gti0iterable of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
gti1iterable of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

The two lists of GTIs to be crossed.

Returns:
gtis[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

The newly created GTIs.

See also

cross_gtis

From multiple GTI lists, extract common intervals EXACTLY

Examples

>>> gti1 = np.array([[1, 2]])
>>> gti2 = np.array([[1, 2]])
>>> newgti = cross_two_gtis(gti1, gti2)
>>> assert np.allclose(newgti, [[1, 2]])
>>> gti1 = np.array([[1, 4]])
>>> gti2 = np.array([[1, 2], [2, 4]])
>>> newgti = cross_two_gtis(gti1, gti2)
>>> assert np.allclose(newgti, [[1, 4]])
>>> gti1 = np.array([[1, 2]])
>>> gti2 = np.array([[2, 3]])
>>> newgti = cross_two_gtis(gti1, gti2)
>>> len(newgti)
0
stingray.gti.generate_indices_of_gti_boundaries(times, gti, dt=0)[source]

Get the indices of events from different GTIs of the observation.

This is a generator, yielding the boundaries of each GTI and the corresponding indices in the time array.

Parameters:
timesfloat np.array

Array of times.

gti[[gti00, gti01], [gti10, gti11], …]

Good time intervals.

Yields:
g0: float

Start time of current GTI.

g1: float

End time of current GTI.

startidx: int

Start index of the current GTI in the time array.

stopidx: int

End index of the current GTI in the time array. Note that this is one past the last index, so that time[startidx:stopidx] returns the correct time interval.

Other Parameters:
dtfloat

If times are uniformly binned, this is the binning time.

Examples

>>> times = [0.1, 0.2, 0.5, 0.8, 1.1]
>>> gtis = [[0, 0.55], [0.6, 2.1]]
>>> vals = generate_indices_of_gti_boundaries(times, gtis)
>>> v0 = next(vals)
>>> assert np.allclose(v0[:2], gtis[0])
>>> assert np.allclose(v0[2:], [0, 3])
stingray.gti.generate_indices_of_segment_boundaries_binned(times, gti, segment_size, dt=None)[source]

Get the indices of binned times from different segments of the observation.

This is a generator, yielding the boundaries of each segment and the corresponding indices in the time array.

Parameters:
timesfloat np.array

Array of times, uniformly sampled.

gti[[gti00, gti01], [gti10, gti11], …]

Good time intervals.

segment_sizefloat

Length of segments.

Yields:
t0: float

First time value, from the time array, in the current segment.

t1: float

Last time value, from the time array, in the current segment.

startidx: int

Start index of the current segment in the time array.

stopidx: int

End index of the current segment in the time array. Note that this is one past the last index, so that time[startidx:stopidx] returns the correct time interval.

Examples

>>> times = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
>>> gtis = [[0.05, 0.55]]
>>> vals = generate_indices_of_segment_boundaries_binned(times, gtis, 0.5, dt=0.1)
>>> v0 = next(vals)
>>> assert np.allclose(v0[:2], [0.05, 0.55])
>>> assert np.allclose(v0[2:], [0, 5])
stingray.gti.generate_indices_of_segment_boundaries_unbinned(times, gti, segment_size, check_sorted=True)[source]

Get the indices of events from different segments of the observation.

This is a generator, yielding the boundaries of each segment and the corresponding indices in the time array.

Parameters:
timesfloat np.array

Array of times.

gti[[gti00, gti01], [gti10, gti11], …]

Good time intervals.

segment_sizefloat

Length of segments.

Yields:
t0: float

Start time of current segment.

t1: float

End time of current segment.

startidx: int

Start index of the current segment in the time array.

stopidx: int

End index of the current segment in the time array. Note that this is one past the last index, so that time[startidx:stopidx] returns the correct time interval.

Other Parameters:
check_sortedbool, default True

If True, checks that the time array is sorted.

Examples

>>> times = [0.1, 0.2, 0.5, 0.8, 1.1]
>>> gtis = [[0, 0.55], [0.6, 2.1]]
>>> vals = generate_indices_of_segment_boundaries_unbinned(
...    times, gtis, 0.5)
>>> v0 = next(vals)
>>> assert np.allclose(v0[:2], [0, 0.5])
>>> # Note: 0.5 is not included in the interval
>>> assert np.allclose(v0[2:], [0, 2])
>>> v1 = next(vals)
>>> assert np.allclose(v1[:2], [0.6, 1.1])
>>> # Again: 1.1 is not included in the interval
>>> assert np.allclose(v1[2:], [3, 4])
stingray.gti.get_btis(gtis, start_time=None, stop_time=None)[source]

From GTIs, obtain bad time intervals, i.e. the intervals not covered by the GTIs.

GTIs have to be well-behaved, in the sense that they have to pass check_gtis.

Parameters:
gtisiterable

A list of GTIs.

start_timefloat

Optional start time of the overall observation (e.g. can be earlier than the first time stamp in gtis).

stop_timefloat

Optional stop time of the overall observation (e.g. can be later than the last time stamp in gtis).

Returns:
btisnumpy.ndarray

A list of bad time intervals.
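
Examples

A minimal sketch, assuming well-behaved (sorted, non-overlapping) GTIs:

>>> gtis = [[1, 2], [4, 5]]
>>> btis = get_btis(gtis, start_time=0, stop_time=6)
>>> assert np.allclose(btis, [[0, 1], [2, 4], [5, 6]])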

stingray.gti.get_gti_extensions_from_pattern(lchdulist, name_pattern='GTI')[source]

Get the GTI extensions that match a given pattern.

Parameters:
lchdulist: `:class:astropy.io.fits.HDUList` object

The full content of a FITS file.

name_pattern: str

Pattern indicating all the GTI extensions.

Returns:
ext_list: list

List of GTI extension numbers whose name matches the input pattern.

Examples

>>> from astropy.io import fits
>>> start = np.arange(0, 300, 100)
>>> stop = start + 50.
>>> s1 = fits.Column(name='START', array=start, format='D')
>>> s2 = fits.Column(name='STOP', array=stop, format='D')
>>> hdu1 = fits.TableHDU.from_columns([s1, s2], name='GTI005XX')
>>> hdu2 = fits.TableHDU.from_columns([s1, s2], name='GTI00501')
>>> lchdulist = fits.HDUList([hdu1, hdu2])
>>> gtiextn = get_gti_extensions_from_pattern(
...     lchdulist, name_pattern='GTI005[0-9]+')
>>> assert np.allclose(gtiextn, [1])
stingray.gti.get_gti_from_all_extensions(lchdulist, accepted_gtistrings=['GTI'], det_numbers=None)[source]

Intersect the GTIs from all the accepted extensions.

Parameters:
lchdulist: `:class:astropy.io.fits.HDUList` object

The full content of a FITS file.

accepted_gtistrings: list of str

Base strings of GTI extensions. For missions that add the detector number to GTI extension names (e.g. XMM and Chandra), this function automatically adds the detector number and looks for all matching GTI extensions (e.g. “STDGTI” will also retrieve “STDGTI05”; “GTI0” will also retrieve “GTI00501”).

Returns:
gti_list: [[gti00, gti01], [gti10, gti11], …]

List of good time intervals, as the intersection of all matching GTIs. If there are two matching extensions, with GTIs [[0, 50], [100, 200]] and [[40, 70]] respectively, this function will return [[40, 50]].

Examples

>>> from astropy.io import fits
>>> s1 = fits.Column(name='START', array=[0, 100, 200], format='D')
>>> s2 = fits.Column(name='STOP', array=[50, 150, 250], format='D')
>>> hdu1 = fits.TableHDU.from_columns([s1, s2], name='GTI00501')
>>> s1 = fits.Column(name='START', array=[200, 300], format='D')
>>> s2 = fits.Column(name='STOP', array=[250, 350], format='D')
>>> hdu2 = fits.TableHDU.from_columns([s1, s2], name='STDGTI05')
>>> lchdulist = fits.HDUList([hdu1, hdu2])
>>> gti = get_gti_from_all_extensions(
...     lchdulist, accepted_gtistrings=['GTI0', 'STDGTI'],
...     det_numbers=[5])
>>> assert np.allclose(gti, [[200, 250]])
stingray.gti.get_gti_from_hdu(gtihdu)[source]

Get the GTIs from a given FITS extension.

Parameters:
gtihdu: `:class:astropy.io.fits.TableHDU` object

The GTI HDU.

Returns:
gti_list: [[gti00, gti01], [gti10, gti11], …]

List of good time intervals.

Examples

>>> from astropy.io import fits
>>> start = np.arange(0, 300, 100)
>>> stop = start + 50.
>>> s1 = fits.Column(name='START', array=start, format='D')
>>> s2 = fits.Column(name='STOP', array=stop, format='D')
>>> hdu1 = fits.TableHDU.from_columns([s1, s2], name='GTI00501')
>>> gti = get_gti_from_hdu(hdu1)
>>> assert np.allclose(gti, [[0, 50], [100, 150], [200, 250]])
stingray.gti.get_gti_lengths(gti)[source]

Calculate the length of each Good Time Interval.

Parameters:
gti[[gti00, gti01], [gti10, gti11], …]

The list of good time intervals.

Returns:
lengthsnp.ndarray

List of GTI lengths.

Examples

>>> gti = [[0, 1000], [1000, 1001], [3000, 3020]]
>>> assert np.allclose(get_gti_lengths(gti), [1000, 1, 20])
stingray.gti.get_total_gti_length(gti, minlen=0)[source]

Calculate the total exposure during Good Time Intervals.

Parameters:
gti[[gti00, gti01], [gti10, gti11], …]

The list of good time intervals.

minlenfloat

Minimum GTI length to consider.

Returns:
lengthfloat

The total exposure during GTIs.

Examples

>>> gti = [[0, 1000], [1000, 1001], [3000, 3020]]
>>> assert np.isclose(get_total_gti_length(gti), 1021)
>>> assert np.isclose(get_total_gti_length(gti, minlen=5), 1020)
stingray.gti.gti_border_bins(gtis, time, dt=None, epsilon=0.001)[source]

Find the indices in a time array corresponding to the borders of GTIs.

GTIs shorter than the bin time are not returned.

Parameters:
gtis2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

timearray-like

Array of time stamps.

Returns:
spectrum_start_binsarray-like

List of starting bins of each GTI

spectrum_stop_binsarray-like

List of stop bins of each GTI. The elements corresponding to these bins should not be included.

Other Parameters:
dtfloat or array of floats. Default median(diff(time))

Time resolution of the light curve. Can be an array of the same dimension as time

epsilonfloat, default 0.001

The tolerance, in fraction of dt, for the comparisons at the borders.

Examples

>>> times = np.arange(0.5, 13.5)
>>> gti_border_bins([[16., 18.]], times)
Traceback (most recent call last):
    ...
ValueError: Invalid time interval for the given GTIs
>>> start_bins, stop_bins = gti_border_bins(
...    [[0, 5], [6, 8]], times)
>>> assert np.allclose(start_bins, [0, 6])
>>> assert np.allclose(stop_bins, [5, 8])
>>> assert np.allclose(times[start_bins[0]:stop_bins[0]], [0.5, 1.5, 2.5, 3.5, 4.5])
>>> assert np.allclose(times[start_bins[1]:stop_bins[1]], [6.5, 7.5])
>>> start_bins, stop_bins = gti_border_bins(
...    [[0, 5], [6, 13]], times, dt=np.ones_like(times))
>>> assert np.allclose(start_bins, [0, 6])
>>> assert np.allclose(stop_bins, [5, 13])
>>> assert np.allclose(times[start_bins[0]:stop_bins[0]], [0.5, 1.5, 2.5, 3.5, 4.5])
>>> np.allclose(times[start_bins[1]:stop_bins[1]], [6.5, 7.5, 8.5, 9.5, 10.5, 11.5, 12.5])
True
stingray.gti.join_gtis(gti0, gti1)[source]

Union of two GTIs.

If GTIs are mutually exclusive, it calls append_gtis. Otherwise we put the extremes of partially overlapping GTIs on an ideal line and look at the number of opened and closed intervals. When the number of closed and opened intervals is the same, the full GTI is complete and we close it.

In practice, we assign to each opening time of a GTI the value -1, and the value 1 to each closing time; when the cumulative sum is zero, the GTI has ended. The timestamp after each closed GTI is the start of a new one.

(g_all)    0     1     2     3     4     5     6     7     8     9
(cumsum)   -1   -2    -1     0    -1    -2     -1   -2    -1     0
GTI A      |-----:-----|     :     |-----:-----|     |-----:-----|
FINAL GTI  |-----:-----------|     |-----:-----------------:-----|
GTI B            |-----------|           |-----------------|

In case one GTI ends exactly where another one starts, the cumulative sum is 0 but we do not want to close the interval. In this case, we check that the next element of the sequence is not equal to the one where we would close.

(g_all)    0    1,1         3,3          5
(cumsum)   -1   0,-1       -1,-2         0
GTI A      |-----|           |-----------|
FINAL GTI  |-----------------------------|
GTI B            |-----------|

Parameters:
gti0: 2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

gti1: 2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

Returns:
gti: 2-d float array

The newly created GTI
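
Examples

A minimal sketch of the union of partially overlapping GTIs:

>>> gti0 = [[0, 2], [5, 7]]
>>> gti1 = [[1, 3]]
>>> assert np.allclose(join_gtis(gti0, gti1), [[0, 3], [5, 7]])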

stingray.gti.load_gtis(fits_file, gtistring=None)[source]

Load Good Time Intervals (GTIs) from HDU EVENTS of file fits_file. File is expected to be in FITS format.

Parameters:
fits_filestr

File name and path for the FITS file with the GTIs to be loaded.

gtistringstr

If the name of the FITS extension with the GTIs is not GTI, the alternative name can be set with this parameter.

Returns:
gti_listlist

A list of GTI (start, stop) pairs extracted from the FITS file.

stingray.gti.time_intervals_from_gtis(gtis, segment_size, fraction_step=1, epsilon=1e-05)[source]

Compute start/stop times of equal time intervals, compatible with GTIs.

Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI).

Parameters:
gtis2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

segment_sizefloat

Length of the time segments

fraction_stepfloat

If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half segment_size).

Returns:
spectrum_start_timesarray-like

List of starting times to use in the spectral calculations.

spectrum_stop_timesarray-like

List of end times to use in the spectral calculations.
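
Examples

A minimal sketch with a single GTI split into equal segments:

>>> start_times, stop_times = time_intervals_from_gtis([[0, 10]], 5)
>>> assert np.allclose(start_times, [0, 5])
>>> assert np.allclose(stop_times, [5, 10])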

I/O Functionality

stingray.io.common_name(str1, str2, default='common')[source]

Strip two strings of the letters not in common.

Filenames must be of the same length and only differ by a few letters.

Parameters:
str1str
str2str
Returns:
common_strstr

A string containing the parts of the two names in common

Other Parameters:
defaultstr

The string to return if common_str is empty

stingray.io.get_file_extension(fname)[source]

Get the extension from the file name.

If g-zipped, add ‘.gz’ to extension.

Examples

>>> get_file_extension('ciao.tar')
'.tar'
>>> get_file_extension('ciao.tar.gz')
'.tar.gz'
>>> get_file_extension('ciao.evt.gz')
'.evt.gz'
>>> get_file_extension('ciao.a.tutti.evt.gz')
'.evt.gz'
stingray.io.get_key_from_mission_info(info, key, default, inst=None, mode=None)[source]

Get the name of a header key or table column from the mission database.

Many entries in the mission database have default values that can be altered for specific instruments or observing modes. Here, if there is a definition for a given instrument and mode, we take that; otherwise, we use the default.

Parameters:
infodict

Nested dictionary containing all the information for a given mission. It can be nested, e.g. contain some info for a given instrument, and for each observing mode of that instrument.

keystr

The key to read from the info dictionary

defaultobject

The default value. It can be of any type, depending on the expected type for the entry.

Returns:
retvalobject

The wanted entry from the info dictionary

Other Parameters:
inststr

Instrument

modestr

Observing mode

Examples

>>> info = {'ecol': 'PI', "A": {"ecol": "BLA"}, "C": {"M1": {"ecol": "X"}}}
>>> get_key_from_mission_info(info, "ecol", "BU", inst="A", mode=None)
'BLA'
>>> get_key_from_mission_info(info, "ecol", "BU", inst="B", mode=None)
'PI'
>>> get_key_from_mission_info(info, "ecol", "BU", inst="A", mode="M1")
'BLA'
>>> get_key_from_mission_info(info, "ecol", "BU", inst="C", mode="M1")
'X'
>>> get_key_from_mission_info(info, "ghghg", "BU", inst="C", mode="M1")
'BU'
stingray.io.high_precision_keyword_read(hdr, keyword)[source]

Read FITS header keywords, even if split in two.

In the case where the keyword is split in two, like

MJDREF = MJDREFI + MJDREFF

in some missions, this function returns the summed value. Otherwise, it returns the content of the single keyword.

Parameters:
hdrdict_like

The FITS header structure, or a dictionary

keywordstr

The key to read in the header

Returns:
valuelong double

The value of the key, or None if something went wrong
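
Examples

A minimal sketch using a plain dictionary in place of a FITS header:

>>> hdr = {'MJDREFI': 55000, 'MJDREFF': 0.5}
>>> assert high_precision_keyword_read(hdr, 'MJDREF') == 55000.5
>>> assert high_precision_keyword_read(hdr, 'MJDREFI') == 55000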

stingray.io.lcurve_from_fits(fits_file, gtistring='GTI', timecolumn='TIME', ratecolumn=None, ratehdu=1, fracexp_limit=0.9, outfile=None, noclobber=False, outdir=None)[source]

Load a lightcurve from a fits file.

Note

FITS light curve handling is still under testing. Absolute times might be incorrect depending on the light curve format.

Parameters:
fits_filestr

File name of the input light curve in FITS format

Returns:
datadict

Dictionary containing all information needed to create a stingray.Lightcurve object

Other Parameters:
gtistringstr

Name of the GTI extension in the FITS file

timecolumnstr

Name of the column containing times in the FITS file

ratecolumnstr

Name of the column containing rates in the FITS file

ratehdustr or int

Name or index of the FITS extension containing the light curve

fracexp_limitfloat

Minimum exposure fraction allowed

noclobberbool

If True, do not overwrite existing files

stingray.io.load_events_and_gtis(fits_file, additional_columns=None, gtistring=None, gti_file=None, hduname=None, column=None)[source]

Load event lists and GTIs from one or more files.

Load the event list from the EVENTS HDU of file fits_file, together with its Good Time Intervals. Optionally, return additional columns of data from the same HDU.

Parameters:
fits_filestr
Returns:
retvalsObject with the following attributes:
ev_listarray-like

Event times in Mission Epoch Time

gti_list: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

GTIs in Mission Epoch Time

additional_data: dict

A dictionary whose keys are those specified in additional_columns. Each entry is an array with the values of the specified column in the FITS file.

t_startfloat

Start time in Mission Epoch Time

t_stopfloat

Stop time in Mission Epoch Time

pi_listarray-like

Raw Instrument energy channels

cal_pi_listarray-like

Calibrated PI channels (those that can be easily converted to energy values, regardless of the instrument setup).

energy_listarray-like

Energy of each photon in keV (only for NuSTAR, NICER, XMM)

instrstr

Name of the instrument (e.g. EPIC-pn or FPMA)

missionstr

Name of the mission (e.g. XMM or NuSTAR)

mjdreffloat

MJD reference time for the mission

headerstr

Full header of the FITS file, for debugging purposes

detector_idarray-like, int

Detector id for each photon (e.g. each of the CCDs composing XMM’s or Chandra’s instruments)

Other Parameters:
additional_columns: list of str, optional

A list of keys corresponding to the additional columns to extract from the event HDU (e.g. ['PI', 'X'])

gtistringstr

Comma-separated list of accepted GTI extensions (default GTI,STDGTI), with or without an appended integer number denoting the detector

gti_filestr, default None

External GTI file

hdunamestr or int, default 1

Name of the HDU containing the event list

columnstr, default None

The column containing the time values. If None, we use the name specified in the mission database, and if there is nothing there, “TIME”

stingray.io.mkdir_p(path)[source]

Safe mkdir function, analogous to mkdir -p: create the directory without raising an error if it already exists

Parameters:
pathstr

The absolute path to the directory to be created

stingray.io.pi_to_energy(pis, rmf_file)[source]

Read the energy channels corresponding to the given PI channels.

Parameters:
pisarray-like

The channels to look up in the rmf

rmf_filestr

The rmf file used to read the calibration.

stingray.io.read_header_key(fits_file, key, hdu=1)[source]

Read the header key key from HDU hdu of the file fits_file.

Parameters:
fits_file: str

The file name and absolute path to the event file.

key: str

The keyword to be read

Returns:
valueobject

The value stored under key in fits_file

Other Parameters:
hduint

Index of the HDU extension from which the header key is to be read.

stingray.io.read_mission_info(mission=None)[source]

Search for the relevant information about a mission in xselect.mdb.

stingray.io.read_rmf(rmf_file)[source]

Load RMF info.

Note

Preliminary: only EBOUNDS are read.

Parameters:
rmf_filestr

The rmf file used to read the calibration.

Returns:
pisarray-like

The PI channels

e_minsarray-like

The lower energy bound of each PI channel

e_maxsarray-like

The upper energy bound of each PI channel

stingray.io.ref_mjd(fits_file, hdu=1)[source]

Read MJDREFI + MJDREFF or, if that fails, MJDREF from the FITS header.

Parameters:
fits_filestr

The file name and absolute path to the event file.

Returns:
mjdrefnumpy.longdouble

The reference MJD

Other Parameters:
hduint

Index of the HDU extension from which the header key is to be read.

stingray.io.rough_calibration(pis, mission)[source]

Make a rough conversion between PI channel and energy.

Only works for NICER, NuSTAR, IXPE, and XMM.

Parameters:
pis: float or array of floats

PI channels in data

mission: str

Mission name

Returns:
energiesfloat or array of floats

Energy values

Examples

>>> rough_calibration(0, 'nustar')
1.6
>>> rough_calibration(0.0, 'ixpe')
0.0
>>> # It's case-insensitive
>>> rough_calibration(1200, 'XMm')
1.2
>>> rough_calibration(10, 'asDf')
Traceback (most recent call last):
    ...
ValueError: Mission asdf not recognized
>>> rough_calibration(100, 'nicer')
1.0
stingray.io.savefig(filename, **kwargs)[source]

Save a figure plotted by matplotlib.

Note: this function is supposed to be used after the plot function. Otherwise, it will save a blank image with no plot.

Parameters:
filenamestr

The name of the image file. The extension must be specified in the file name. For example, a filename with the .png extension will give a rasterized image, while the .pdf extension will give a vectorized output.

kwargskeyword arguments

Keyword arguments to be passed to the savefig function of matplotlib.pyplot. For example, use bbox_inches='tight' to remove undesirable whitespace around the image.

stingray.io.split_numbers(number, shift=0)[source]

Split high precision number(s) into doubles.

You can specify the number of shifts to move the decimal point.

Parameters:
number: long double

The input high precision number which is to be split

Returns:
number_I: double

First part of high precision number

number_F: double

Second part of high precision number

Other Parameters:
shift: integer

Move the cut by shift decimal points to the right (left if negative)

Examples

>>> n = 12.34
>>> i, f = split_numbers(n)
>>> assert i == 12
>>> assert np.isclose(f, 0.34)
>>> assert np.allclose(split_numbers(n, 2), (12.34, 0.0))
>>> assert np.allclose(split_numbers(n, -1), (10.0, 2.34))

Other Utility Functions

stingray.utils.baseline_als(x, y, lam=None, p=None, niter=10, return_baseline=False, offset_correction=False)[source]

Baseline Correction with Asymmetric Least Squares Smoothing.

Parameters:
xarray-like

the sample time/number/position

yarray-like

the data series corresponding to x

lamfloat

the lambda parameter of the ALS method. This controls how much the baseline can adapt to local changes. A higher value corresponds to a stiffer baseline

pfloat

the asymmetry parameter of the ALS method. This controls the overall slope tolerated for the baseline. A higher value corresponds to a higher possible slope

Returns:
y_subtractedarray-like, same size as y

The initial time series with the baseline subtracted

baselinearray-like, same size as y

Fitted baseline. Only returned if return_baseline is True

Other Parameters:
niterint

The number of iterations to perform

return_baselinebool

If True, also return the fitted baseline

offset_correctionbool

If True, also correct for an offset, aligning the baseline with the running mean of the scan

Examples

>>> x = np.arange(0, 10, 0.01)
>>> y = np.zeros_like(x) + 10
>>> ysub = baseline_als(x, y)
>>> assert np.all(ysub < 0.001)
stingray.utils.check_isallfinite(array)[source]

Check if all elements of an array are finite.

Calls _check_isallfinite_numba if numba is installed, otherwise it uses np.isfinite.

Examples

>>> assert check_isallfinite([1, 2, 3])
>>> check_isallfinite([1, np.inf, 3])
False
>>> check_isallfinite([1, np.nan, 3])
False
stingray.utils.contiguous_regions(condition)[source]

Find contiguous True regions of the boolean array condition.

Return a 2D array where the first column is the start index of the region and the second column is the end index (method adapted from a StackOverflow answer).

Parameters:
conditionbool array
Returns:
idx[[i0_0, i0_1], [i1_0, i1_1], ...]

A list of integer couples, with the start and end of each True block in the original array

stingray.utils.create_window(N, window_type='uniform')[source]

A method to create window functions commonly used in signal processing.

Supported windows are: uniform (rectangular), Parzen, Hamming, Hanning, triangular, Welch, Blackman, and flat-top.

Parameters:
Nint

Total number of data points in window. If negative, abs is taken.

window_type{uniform, parzen, hamming, hanning, triangular, welch, blackmann, flat-top}, optional, default uniform

Type of window to create.

Returns:
window: numpy.ndarray

Window function of length N.

stingray.utils.excess_variance(lc, normalization='fvar')[source]

Calculate the excess variance.

Vaughan et al. 2003, MNRAS 345, 1271 give three measurements of source intrinsic variance: if a light curve has a total variance of \(S^2\), and each point has an error bar \(\sigma_{err}\), the excess variance is defined as

\[\sigma_{XS} = S^2 - \overline{\sigma_{err}}^2;\]

the normalized excess variance is the excess variance divided by the square of the mean intensity:

\[\sigma_{NXS} = \dfrac{\sigma_{XS}}{\overline{x}^2};\]

the fractional mean square variability amplitude, or \(F_{var}\), is finally defined as

\[F_{var} = \sqrt{\dfrac{\sigma_{XS}}{\overline{x}^2}}\]
Parameters:
lca Lightcurve object
normalizationstr

if fvar, return the fractional mean square variability \(F_{var}\). If none, return the unnormalized excess variance \(\sigma_{XS}\). If norm_xs, return the normalized excess variance \(\sigma_{NXS}\)

Returns:
var_xsfloat

var_xs_errfloat
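
As a worked illustration of the formulas above (a sketch of the statistics, not of this function's exact implementation):

>>> counts = np.array([10., 20., 15., 30., 25.])
>>> err = np.sqrt(counts)                          # Poisson error bars
>>> S2 = np.var(counts, ddof=1)                    # total variance
>>> sigma_xs = S2 - np.mean(err**2)                # excess variance
>>> fvar = np.sqrt(sigma_xs / np.mean(counts)**2)  # fractional variability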
stingray.utils.find_nearest(array, value, side='left')[source]

Return the array value that is closest to the input value (Abigail Stevens: Thanks StackOverflow!)

Parameters:
arraynp.array of ints or floats

1-D array of numbers to search through. Should already be sorted from low values to high values.

valueint or float

The value you want to find the closest to in the array.

Returns:
array[idx]int or float

The array value that is closest to the input value.

idxint

The index of the array of the closest value.

Other Parameters:
sidestr

Look at the numpy.searchsorted documentation for more information.

stingray.utils.get_random_state(random_state=None)[source]

Return a Mersenne Twister pseudo-random number generator.

Parameters:
random_stateinteger or numpy.random.RandomState, optional, default None
Returns:
random_statemtrand.RandomState object
stingray.utils.heaviside(x)[source]

Heaviside function. Returns 1 if x>0, and 0 otherwise.

Examples

>>> heaviside(2)
1
>>> heaviside(-1)
0
stingray.utils.is_int(obj)[source]

Test if object is an integer.

stingray.utils.is_iterable(var)[source]

Test if a variable is an iterable.

Parameters:
varobject

The variable to be tested for iterability

Returns:
is_iterbool

Returns True if var is an Iterable, False otherwise

stingray.utils.is_string(s)[source]

Portable function to answer whether a variable is a string.

Parameters:
sobject

An object that is potentially a string

Returns:
isstringbool

A boolean decision on whether s is a string or not

stingray.utils.look_for_array_in_array(array1, array2)[source]

Find a subset of values in an array.

Parameters:
array1iterable

An array with values to be searched

array2iterable

A second array which potentially contains a subset of values also contained in array1

Returns:
array3iterable

An array with the subset of values contained in both array1 and array2
stingray.utils.nearest_power_of_two(x)[source]

Return the number nearest to x that is an integral power of two.

Parameters:
xint, float
Returns:
x_nearestint

The number closest to x that is an integral power of two.

stingray.utils.optimal_bin_time(fftlen, tbin)[source]

Vary slightly the bin time to have a power of two number of bins.

Given an FFT length and a proposed bin time, return a bin time slightly shorter than the original, that will produce a power-of-two number of FFT bins.

Parameters:
fftlenint

Number of positive frequencies in a proposed Fourier spectrum

tbinfloat

The proposed time resolution of a light curve

Returns:
resfloat

A time resolution that will produce a Fourier spectrum with fftlen frequencies and a number of FFT bins that are a power of two
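
Examples

A minimal sketch (assuming the bin time is shortened until the number of bins in fftlen is a power of two):

>>> assert optimal_bin_time(512, 1.1) == 1.0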

stingray.utils.order_list_of_arrays(data, order)[source]

Sort a list or dictionary of arrays according to the specified order.

Parameters:
dataiterable
Returns:
datalist or dict
stingray.utils.poisson_symmetrical_errors(counts)[source]

Optimized version of frequentist symmetrical errors.

Uses a lookup table in order to limit the calls to poisson_conf_interval

Parameters:
countsiterable

An array of Poisson-distributed numbers

Returns:
errnumpy.ndarray

An array of uncertainties associated with the Poisson counts in counts

Examples

>>> from astropy.stats import poisson_conf_interval
>>> counts = np.random.randint(0, 1000, 100)
>>> # ---- Do it without the lookup table ----
>>> err_low, err_high = poisson_conf_interval(np.asarray(counts),
...                 interval='frequentist-confidence', sigma=1)
>>> err_low -= np.asarray(counts)
>>> err_high -= np.asarray(counts)
>>> err = (np.absolute(err_low) + np.absolute(err_high))/2.0
>>> # Do it with this function
>>> err_thisfun = poisson_symmetrical_errors(counts)
>>> # Test that results are always the same
>>> assert np.allclose(err_thisfun, err)
stingray.utils.rebin_data(x, y, dx_new, yerr=None, method='sum', dx=None)[source]

Rebin some data to an arbitrary new data resolution. Either sum the data points in the new bins or average them.

Parameters:
x: iterable

The independent variable with some resolution, which can vary throughout the time series.

y: iterable

The dependent variable to be binned.

dx_new: float

The new resolution of the independent variable x.

Returns:
xbin: numpy.ndarray

The midpoints of the new bins in x

ybin: numpy.ndarray

The binned quantity y

ybin_err: numpy.ndarray

The uncertainties of the binned values of y.

step_size: float

The size of the binning step

Other Parameters:
yerr: iterable, optional

The uncertainties of y, to be propagated during binning.

method: {``sum`` | ``average`` | ``mean``}, optional, default ``sum``

The method to be used in binning. Either sum the samples y in each new bin of x, or take the arithmetic mean.

dx: float

The old resolution (otherwise, calculated from difference between time bins)

Examples

>>> x = np.arange(0, 100, 0.01)
>>> y = np.ones(x.size)
>>> yerr = np.ones(x.size)
>>> xbin, ybin, ybinerr, step_size = rebin_data(
...     x, y, 4, yerr=yerr, method='sum', dx=0.01)
>>> assert np.allclose(ybin, 400)
>>> assert np.allclose(ybinerr, 20)
>>> xbin, ybin, ybinerr, step_size = rebin_data(
...     x, y, 4, yerr=yerr, method='mean')
>>> assert np.allclose(ybin, 1)
>>> assert np.allclose(ybinerr, 0.05)
stingray.utils.rebin_data_log(x, y, f, y_err=None, dx=None)[source]

Logarithmic re-bin of some data. Particularly useful for the power spectrum.

Each new bin width depends on the previous one, modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
x: iterable

The independent variable with some resolution dx_old = x[1]-x[0]

y: iterable

The dependent variable to be binned

f: float

The factor of increase of each bin wrt the previous one.

Returns:
xbin: numpy.ndarray

The midpoints of the new bins in x

ybin: numpy.ndarray

The binned quantity y

ybin_err: numpy.ndarray

The uncertainties of the binned values of y

step_size: float

The size of the binning step

Other Parameters:
y_err: iterable, optional

The uncertainties of y to be propagated during binning.

dx: float, optional

The binning step of the initial x
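
Examples

A minimal usage sketch (output values are indicative; the bin widths grow geometrically by the factor 1 + f):

>>> f = np.arange(1.0, 101.0)    # e.g. a frequency grid
>>> p = np.ones_like(f)          # a flat spectrum
>>> fbin, pbin, pbin_err, step = rebin_data_log(f, p, 0.1, y_err=np.ones_like(f))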

stingray.utils.simon(message, **kwargs)[source]

The Statistical Interpretation MONitor.

A warning system designed to always remind the user that Simon is watching him/her.

Parameters:
messagestring

The message that is thrown

kwargsdict

The rest of the arguments that are passed to warnings.warn

stingray.utils.standard_error(xs, mean)[source]

Return the standard error of the mean (SEM) of an array of arrays.

Parameters:
xs2-d float array

List of data point arrays.

mean1-d float array

Average of the data points.

Returns:
standard_error1-d float array

Standard error of the mean (SEM).

Modeling

This subpackage defines classes and functions related to parametric modelling of various types of data sets. Currently, most functionality is focused on modelling Fourier products (especially power spectra and averaged power spectra), but rudimentary functionality exists for modelling e.g. light curves.

Log-Likelihood Classes

These classes define basic log-likelihoods for modelling time series and power spectra. stingray.modeling.LogLikelihood is an abstract base class, i.e. a template for creating user-defined log-likelihoods and should not be instantiated itself. Based on this base class are several definitions for a stingray.modeling.GaussianLogLikelihood, appropriate for data with normally distributed uncertainties, a stingray.modeling.PoissonLogLikelihood appropriate for photon counting data, and a stingray.modeling.PSDLogLikelihood appropriate for (averaged) power spectra.

class stingray.modeling.LogLikelihood(x, y, model, **kwargs)[source]

Abstract Base Class defining the structure of a LogLikelihood object. This class cannot be called itself, since each statistical distribution has its own definition for the likelihood, which should occur in subclasses.

Parameters:
xiterable

x-coordinate of the data. Could be multi-dimensional.

yiterable

y-coordinate of the data. Could be multi-dimensional.

modelan astropy.modeling.FittableModel instance

Your model

kwargs

keyword arguments specific to the individual sub-classes. For details, see the respective docstrings for each subclass

abstract evaluate(parameters)[source]

This is where you define your log-likelihood. Do this, but do it in a subclass!

class stingray.modeling.GaussianLogLikelihood(x, y, yerr, model)[source]

Likelihood for data with Gaussian uncertainties. Astronomers also call this likelihood Chi-Squared, but be aware that this has nothing to do with the likelihood based on the Chi-square distribution, which is defined in PSDLogLikelihood in this module!

Use this class whenever your data has Gaussian uncertainties.

Parameters:
xiterable

x-coordinate of the data

yiterable

y-coordinate of the data

yerriterable

the uncertainty on the data, as standard deviation

modelan astropy.modeling.FittableModel instance

The model to use in the likelihood.

Attributes:
xiterable

x-coordinate of the data

yiterable

y-coordinate of the data

yerriterable

the uncertainty on the data, as standard deviation

modelan Astropy Model instance

The model to use in the likelihood.

nparint

The number of free parameters in the model

evaluate(pars, neg=False)[source]

Evaluate the Gaussian log-likelihood for a given set of parameters.

Parameters:
parsnumpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-likelihood. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-likelihood, i.e. -loglike, rather than loglike. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
loglikefloat

The log(likelihood) value for the data and model.
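
A minimal usage sketch (the constant model and parameter values are illustrative only):

>>> import numpy as np
>>> from astropy.modeling import models
>>> x = np.arange(100)
>>> y = np.random.normal(2.0, 0.1, size=x.size)   # data scattered around 2.0
>>> yerr = np.full(x.size, 0.1)
>>> loglike = GaussianLogLikelihood(x, y, yerr, models.Const1D())
>>> ll = loglike.evaluate([2.0])                  # log-likelihood at amplitude = 2
>>> neg_ll = loglike.evaluate([2.0], neg=True)    # negative, for use with minimizers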

class stingray.modeling.PoissonLogLikelihood(x, y, model)[source]

Likelihood for data with uncertainties following a Poisson distribution. This is useful e.g. for (binned) photon count data.

Parameters:
xiterable

x-coordinate of the data

yiterable

y-coordinate of the data

modelan astropy.modeling.FittableModel instance

The model to use in the likelihood.

Attributes:
xiterable

x-coordinate of the data

yiterable

y-coordinate of the data

yerriterable

the uncertainty on the data, as standard deviation

modelan astropy.modeling.FittableModel instance

The model to use in the likelihood.

nparint

The number of free parameters in the model

evaluate(pars, neg=False)[source]

Evaluate the log-likelihood for a given set of parameters.

Parameters:
parsnumpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-likelihood. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-likelihood, i.e. -loglike, rather than loglike. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
loglikefloat

The log(likelihood) value for the data and model.

class stingray.modeling.PSDLogLikelihood(freq, power, model, m=1)[source]

A likelihood based on the Chi-square distribution, appropriate for modelling (averaged) power spectra. Note that this is not the same as the statistic astronomers commonly call Chi-Square, which is a fit statistic derived from the Gaussian log-likelihood, defined elsewhere in this module.

Parameters:
freqiterable

Array with frequencies

poweriterable

Array with (averaged/singular) powers corresponding to the frequencies in freq

modelan astropy.modeling.FittableModel instance

The model to use in the likelihood.

mint

1/2 of the degrees of freedom, i.e. the number of powers averaged in each bin

Attributes:
xiterable

x-coordinate of the data

yiterable

y-coordinate of the data

yerriterable

the uncertainty on the data, as standard deviation

modelan astropy.modeling.FittableModel instance

The model to use in the likelihood.

nparint

The number of free parameters in the model

evaluate(pars, neg=False)[source]

Evaluate the log-likelihood for a given set of parameters.

Parameters:
parsnumpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-likelihood. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-likelihood, i.e. -loglike, rather than loglike. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
loglikefloat

The log(likelihood) value for the data and model.

class stingray.modeling.LaplaceLogLikelihood(x, y, yerr, model)[source]

A Laplace likelihood for the cospectrum.

Parameters:
xiterable

Array with independent variable

yiterable

Array with dependent variable

modelan astropy.modeling.FittableModel instance

The model to use in the likelihood.

yerriterable

Array with the uncertainties on y, in standard deviation

Attributes:
xiterable

x-coordinate of the data

yiterable

y-coordinate of the data

yerriterable

the uncertainty on the data, as standard deviation

modelan astropy.modeling.FittableModel instance

The model to use in the likelihood.

nparint

The number of free parameters in the model

evaluate(pars, neg=False)[source]

Evaluate the log-likelihood for a given set of parameters.

Parameters:
parsnumpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-likelihood. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-likelihood, i.e. -loglike, rather than loglike. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
loglikefloat

The log(likelihood) value for the data and model.


Posterior Classes

These classes define basic posteriors for parametric modelling of time series and power spectra, based on the log-likelihood classes defined in Log-Likelihood Classes. stingray.modeling.Posterior is an abstract base class laying out a basic template for defining posteriors. As with the log-likelihood classes above, several posterior classes are defined for a variety of data types.

Note that priors are not pre-defined in these classes, since they are problem dependent and should be set by the user. The convenience function stingray.modeling.set_logprior() can be useful to help set priors for these posterior classes.
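
For instance, a minimal sketch of attaching priors to a posterior object (the parameter name and prior distribution are illustrative, and lpost stands for any of the posterior objects defined below):

>>> import scipy.stats
>>> from stingray.modeling import set_logprior
>>> priors = {'amplitude': scipy.stats.norm(loc=2.0, scale=1.0).pdf}
>>> lpost.logprior = set_logprior(lpost, priors)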

class stingray.modeling.Posterior(x, y, model, **kwargs)[source]

Define a Posterior object.

The Posterior describes the Bayesian probability distribution of a set of parameters \(\theta\) given some observed data \(D\) and some prior assumptions \(I\).

It is defined as

\[p(\theta | D, I) = p(D | \theta, I) p(\theta | I)/p(D| I)\]

where \(p(D | \theta, I)\) describes the likelihood, i.e. the sampling distribution of the data and the (parametric) model, and \(p(\theta | I)\) describes the prior distribution, i.e. our information about the parameters \(\theta\) before we gathered the data. The marginal likelihood \(p(D| I)\) describes the probability of observing the data given the model assumptions, integrated over the space of all parameters.

Parameters:
xiterable

The abscissa or independent variable of the data. This could in principle be a multi-dimensional array.

yiterable

The ordinate or dependent variable of the data.

modelastropy.modeling.models instance

The parametric model supposed to represent the data. For details see the astropy.modeling documentation

kwargs

keyword arguments related to the subclasses of Posterior. For details, see the documentation of the individual subclasses

References

  • Sivia, D. S., and J. Skilling. “Data Analysis: A Bayesian Tutorial. 2006.”

  • Gelman, Andrew, et al. Bayesian data analysis. Vol. 2. Boca Raton, FL, USA: Chapman & Hall/CRC, 2014.

  • von Toussaint, Udo. “Bayesian inference in physics.” Reviews of Modern Physics 83.3 (2011): 943.

  • Hogg, David W. “Probability Calculus for inference”. arxiv: 1205.4446

logposterior(t0, neg=False)[source]

Definition of the log-posterior. Requires methods loglikelihood and logprior to both be defined.

Note that loglikelihood is set in the subclass of Posterior appropriate for your problem at hand, as is logprior.

Parameters:
t0numpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-posterior. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-posterior, i.e. -lpost, rather than lpost. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
lpostfloat

The value of the log-posterior for the given parameters t0

class stingray.modeling.GaussianPosterior(x, y, yerr, model, priors=None)[source]

A general class for two-dimensional data following a Gaussian sampling distribution.

Parameters:
xnumpy.ndarray

independent variable

ynumpy.ndarray

dependent variable

yerrnumpy.ndarray

measurement uncertainties for y

modelinstance of any subclass of astropy.modeling.FittableModel

The model for the power spectrum.

priorsdict of form {"parameter name": function}, optional

A dictionary with the definitions for the prior probabilities. For each parameter in model, there must be a prior defined with a key of the exact same name as stored in model.param_names. The item for each key is a function defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf). If priors = None, then no prior is set. This means priors need to be added by hand using the set_logprior() function defined in this module. Note that it is impossible to call a Posterior object itself or the self.logposterior method without defining a prior.

logposterior(t0, neg=False)

Definition of the log-posterior. Requires methods loglikelihood and logprior to both be defined.

Note that loglikelihood is set in the subclass of Posterior appropriate for your problem at hand, as is logprior.

Parameters:
t0numpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-posterior. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-posterior, i.e. -lpost, rather than lpost. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
lpostfloat

The value of the log-posterior for the given parameters t0

class stingray.modeling.PoissonPosterior(x, y, model, priors=None)[source]

Posterior for Poisson light curve data. Primary intended use is for modelling X-ray light curves, but alternative uses are conceivable.

Parameters:
xnumpy.ndarray

The independent variable (e.g. time stamps of a light curve)

ynumpy.ndarray

The dependent variable (e.g. counts per bin of a light curve)

modelinstance of any subclass of astropy.modeling.FittableModel

The model for the power spectrum.

priorsdict of form {"parameter name": function}, optional

A dictionary with the definitions for the prior probabilities. For each parameter in model, there must be a prior defined with a key of the exact same name as stored in model.param_names. The item for each key is a function defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf). If priors = None, then no prior is set. This means priors need to be added by hand using the set_logprior() function defined in this module. Note that it is impossible to call a Posterior object itself or the self.logposterior method without defining a prior.

Attributes:
xnumpy.ndarray

The independent variable (list of frequencies) stored in ps.freq

ynumpy.ndarray

The dependent variable (list of powers) stored in ps.power

modelinstance of any subclass of astropy.modeling.FittableModel

The model for the power spectrum.

logposterior(t0, neg=False)

Definition of the log-posterior. Requires methods loglikelihood and logprior to both be defined.

Note that loglikelihood is set in the subclass of Posterior appropriate for your problem at hand, as is logprior.

Parameters:
t0numpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-posterior. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-posterior, i.e. -lpost, rather than lpost. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
lpostfloat

The value of the log-posterior for the given parameters t0

class stingray.modeling.PSDPosterior(freq, power, model, priors=None, m=1)[source]

Posterior distribution for power spectra. Uses an exponential distribution for the errors in the likelihood, or a \(\chi^2\) distribution with \(2M\) degrees of freedom, where \(M\) is the number of frequency bins or power spectra averaged in each bin.

Parameters:
freqiterable

Array with frequencies (e.g. ps.freq from a stingray.Powerspectrum object ps)

poweriterable

Array with (averaged/singular) powers corresponding to the frequencies in freq (e.g. ps.power)

modelinstance of any subclass of astropy.modeling.FittableModel

The model for the power spectrum.

priorsdict of form {"parameter name": function}, optional

A dictionary with the definitions for the prior probabilities. For each parameter in model, there must be a prior defined with a key of the exact same name as stored in model.param_names. The item for each key is a function defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf). If priors = None, then no prior is set. This means priors need to be added by hand using the set_logprior() function defined in this module. Note that it is impossible to call a Posterior object itself or the self.logposterior method without defining a prior.

mint, default 1

The number of averaged periodograms or frequency bins in ps. Useful for binned/averaged periodograms, since the value of m will change the likelihood function!

Attributes:
ps{stingray.Powerspectrum | stingray.AveragedPowerspectrum} instance

the stingray.Powerspectrum object containing the data

xnumpy.ndarray

The independent variable (list of frequencies) stored in ps.freq

ynumpy.ndarray

The dependent variable (list of powers) stored in ps.power

modelinstance of any subclass of astropy.modeling.FittableModel

The model for the power spectrum.

logposterior(t0, neg=False)

Definition of the log-posterior. Requires methods loglikelihood and logprior to both be defined.

Note that loglikelihood is set in the subclass of Posterior appropriate for your problem at hand, as is logprior.

Parameters:
t0numpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-posterior. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-posterior, i.e. -lpost, rather than lpost. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
lpostfloat

The value of the log-posterior for the given parameters t0

class stingray.modeling.LaplacePosterior(x, y, yerr, model, priors=None)[source]

A general class for two-dimensional data following a Laplace sampling distribution.

Parameters:
xnumpy.ndarray

independent variable

ynumpy.ndarray

dependent variable

yerrnumpy.ndarray

measurement uncertainties for y, in standard deviation

modelinstance of any subclass of astropy.modeling.FittableModel

The model for the power spectrum.

priorsdict of form {"parameter name": function}, optional

A dictionary with the definitions for the prior probabilities. For each parameter in model, there must be a prior defined with a key of the exact same name as stored in model.param_names. The item for each key is a function defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf). If priors = None, then no prior is set. This means priors need to be added by hand using the set_logprior() function defined in this module. Note that it is impossible to call a Posterior object itself or the self.logposterior method without defining a prior.

logposterior(t0, neg=False)

Definition of the log-posterior. Requires methods loglikelihood and logprior to both be defined.

Note that loglikelihood is set in the subclass of Posterior appropriate for your problem at hand, as is logprior.

Parameters:
t0numpy.ndarray

An array of parameters at which to evaluate the model and subsequently the log-posterior. Note that the length of this array must match the free parameters in model, i.e. npar

negbool, optional, default False

If True, return the negative log-posterior, i.e. -lpost, rather than lpost. This is useful e.g. for optimization routines, which generally minimize functions.

Returns:
lpostfloat

The value of the log-posterior for the given parameters t0


Parameter Estimation Classes

These classes implement functionality related to parameter estimation. They define basic fit and sample methods using scipy.optimize and emcee, respectively, for optimization and Markov Chain Monte Carlo sampling. stingray.modeling.PSDParEst implements some more advanced functionality for modelling power spectra, including both frequentist and Bayesian searches for (quasi-)periodic signals.

class stingray.modeling.ParameterEstimation(fitmethod='BFGS', max_post=True)[source]

Parameter estimation of two-dimensional data, either via optimization or MCMC. Note: optimization with bounds is not supported. If something like this is required, define (uniform) priors in the ParametricModel instances to be used below.

Parameters:
fitmethodstring, optional, default BFGS

Any of the strings allowed in scipy.optimize.minimize in the method keyword. Sets the fit method to be used.

max_postbool, optional, default True

If True, then compute the Maximum-A-Posteriori estimate. If False, compute a Maximum Likelihood estimate.

calibrate_lrt(lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)[source]

Calibrate the outcome of a Likelihood Ratio Test via MCMC.

In order to compare models via likelihood ratio test, one generally aims to compute a p-value for the null hypothesis (generally the simpler model). There are two special cases where the theoretical distribution (a chi-square distribution) used to compute that p-value analytically from the observed likelihood ratio is not applicable:

  • the models are not nested (i.e. Model 1 is not a special, simpler case of Model 2),

  • the parameter values fixed in Model 2 to retrieve Model 1 are at the edges of parameter space (e.g. if one must set, say, an amplitude to zero in order to remove a component in the more complex model, and negative amplitudes are excluded a priori)

In these cases, the observed likelihood ratio must be calibrated via simulations of the simpler model (Model 1), using MCMC to take into account the uncertainty in the parameters. This function does exactly that: it computes the likelihood ratio for the observed data, and produces simulations to calibrate the likelihood ratio and compute a p-value for observing the data under the assumption that Model 1 is true.

If max_post=True, the code will use MCMC to sample the posterior of the parameters and simulate fake data from there.

If max_post=False, the code will use the covariance matrix derived from the fit to simulate data sets for comparison.

Parameters:
lpost1object of a subclass of Posterior

The Posterior object for model 1

t1iterable

The starting parameters for model 1

lpost2object of a subclass of Posterior

The Posterior object for model 2

t2iterable

The starting parameters for model 2

negbool, optional, default True

Boolean flag to decide whether to use the negative log-likelihood or log-posterior

max_post: bool, optional, default False

If True, set the internal state to do the optimization with the log-likelihood rather than the log-posterior.

Returns:
pvaluefloat [0,1]

The p-value of the observed likelihood ratio under the null hypothesis (Model 1)

compute_lrt(lpost1, t1, lpost2, t2, neg=True, max_post=False)[source]

This function computes the Likelihood Ratio Test between two nested models.

Parameters:
lpost1object of a subclass of Posterior

The Posterior object for model 1

t1iterable

The starting parameters for model 1

lpost2object of a subclass of Posterior

The Posterior object for model 2

t2iterable

The starting parameters for model 2

negbool, optional, default True

Boolean flag to decide whether to use the negative log-likelihood or log-posterior

max_post: bool, optional, default False

If True, set the internal state to do the optimization with the log-likelihood rather than the log-posterior.

Returns:
lrtfloat

The likelihood ratio for model 2 and model 1

res1OptimizationResults object

Contains the result of fitting lpost1

res2OptimizationResults object

Contains the results of fitting lpost2
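
As a hedged sketch of the calling pattern (the synthetic white-noise spectrum, the two nested models, and the starting values below are illustrative assumptions):

>>> import numpy as np
>>> from astropy.modeling import models
>>> from stingray import Powerspectrum
>>> from stingray.modeling import PSDPosterior, ParameterEstimation
>>> np.random.seed(20)
>>> ps = Powerspectrum()
>>> ps.freq = np.linspace(0.01, 10.0, 1000)
>>> ps.power = 2.0 * np.random.exponential(size=ps.freq.size)  # pure white noise
>>> ps.df = ps.freq[1] - ps.freq[0]
>>> ps.m = 1
>>> ps.norm = "leahy"
>>> model1 = models.Const1D()                       # null hypothesis
>>> model2 = models.PowerLaw1D() + models.Const1D() # alternative hypothesis
>>> model2.x_0_0.fixed = True
>>> lpost1 = PSDPosterior(ps.freq, ps.power, model1, m=ps.m)
>>> lpost2 = PSDPosterior(ps.freq, ps.power, model2, m=ps.m)
>>> parest = ParameterEstimation(max_post=False)
>>> lrt, res1, res2 = parest.compute_lrt(lpost1, [2.0], lpost2, [1.0, 1.0, 2.0], max_post=False)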

fit(lpost, t0, neg=True, scipy_optimize_options=None)[source]

Do either a Maximum-A-Posteriori (MAP) or Maximum Likelihood (ML) fit to the data.

MAP fits include priors, ML fits do not.

Parameters:
lpostPosterior (or subclass) instance

an instance of class Posterior or one of its subclasses that defines the function to be minimized (either in loglikelihood or logposterior)

t0{list | numpy.ndarray}

List/array with set of initial parameters

negbool, optional, default True

Boolean to be passed to lpost, setting whether to use the negative posterior or the negative log-likelihood. Useful for optimization routines, which are generally defined as minimization routines.

scipy_optimize_optionsdict, optional, default None

A dictionary with options for scipy.optimize.minimize, directly passed on as keyword arguments.

Returns:
resOptimizationResults object

An object containing useful summaries of the fitting procedure. For details, see documentation of OptimizationResults.
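
Continuing the hedged compute_lrt sketch above, a single model can also be fit directly; res.p_opt then holds the best-fit parameters:

>>> res = parest.fit(lpost1, [2.0], neg=True)
>>> best_pars = res.p_opt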

sample(lpost, t0, cov=None, nwalkers=500, niter=100, burnin=100, threads=1, print_results=True, plot=False, namestr='test', pool=False)[source]

Sample the Posterior distribution defined in lpost using MCMC. Here we use the emcee package, but other implementations could in principle be used.

Parameters:
lpostinstance of a Posterior subclass

an instance of class Posterior or one of its subclasses that defines the function to be minimized (either in loglikelihood or logposterior)

t0iterable

list or array containing the starting parameters. Its length must match lpost.model.npar.

nwalkersint, optional, default 500

The number of walkers (chains) to use during the MCMC procedure. The more walkers are used, the slower the estimation will be, but the better the final distribution is likely to be.

niterint, optional, default 100

The number of iterations to run the MCMC chains for. The larger this number, the longer the estimation will take, but the higher the chance that the walkers have actually converged on the true posterior distribution.

burninint, optional, default 100

The number of iterations to run the walkers before convergence is assumed to have occurred. This part of the chain will be discarded before sampling from what is then assumed to be the posterior distribution desired.

threadsint, optional, default 1 (deprecated)

The number of threads for parallelization. Default is 1, i.e. no parallelization. With the change to emcee version 3, threads is deprecated and no longer has any effect; use the pool keyword argument instead.

print_resultsbool, optional, default True

Boolean flag setting whether the results of the MCMC run should be printed to standard output. Default: True

plotbool, optional, default False

Boolean flag setting whether summary plots of the MCMC chains should be produced. Default: False

namestrstr, optional, default test

Optional string for output file names for the plotting.

poolbool, default False

If True, use pooling to parallelize the operation.

Returns:
resSamplingResults object

An object of class SamplingResults summarizing the results of the MCMC run.
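
A hedged sketch of the sampling step, continuing the objects built in the sketches above; it assumes emcee is installed, and the flat prior is an illustrative choice:

>>> from stingray.modeling import set_logprior
>>> lpost1.logprior = set_logprior(lpost1, {"amplitude": lambda a: (a > 0) & (a < 10)})
>>> sample_res = parest.sample(lpost1, res.p_opt, nwalkers=50, niter=100,
...                            burnin=100, print_results=False, plot=False)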

simulate_lrts(s_all, lpost1, t1, lpost2, t2, max_post=True, seed=None)[source]

Simulate likelihood ratios. For details, see definitions in the subclasses that implement this task.

class stingray.modeling.PSDParEst(ps, fitmethod='BFGS', max_post=True)[source]

Parameter estimation for parametric modelling of power spectra.

This class contains functionality that allows parameter estimation and related tasks that involve fitting a parametric model to an (averaged) power spectrum.

Parameters:
psstingray.Powerspectrum or stingray.AveragedPowerspectrum object

The power spectrum to be modelled

fitmethodstr, optional, default BFGS

A string allowed by scipy.optimize.minimize as a valid fitting method

max_postbool, optional, default True

If True, do a Maximum-A-Posteriori (MAP) fit, i.e. fit with priors; otherwise do a Maximum Likelihood fit

calibrate_highest_outlier(lpost, t0, sample=None, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)[source]

Calibrate the highest outlier in a data set using MCMC-simulated power spectra.

In short, the procedure does a MAP fit to the data, computes the statistic

\[T_R = \max{(2\,\mathrm{data}/\mathrm{model})}\]

and then does an MCMC run using the data and the model, or generates parameter samples from the likelihood distribution using the derived covariance in a Maximum Likelihood fit. From the (posterior) samples, it generates fake power spectra. Each fake spectrum is fit in the same way as the data, and the highest data/model outlier extracted as for the data. The observed value of \(T_R\) can then be directly compared to the simulated distribution of \(T_R\) values in order to derive a p-value of the null hypothesis that the observed \(T_R\) is compatible with being generated by noise.

Parameters:
lpoststingray.modeling.PSDPosterior object

An instance of class stingray.modeling.PSDPosterior that defines the function to be minimized (either in loglikelihood or logposterior)

t0{list | numpy.ndarray}

List/array with set of initial parameters

sampleSamplingResults instance, optional, default None

If a sampler has already been run, the SamplingResults instance can be fed into this method here, otherwise this method will run a sampler automatically

max_post: bool, optional, default False

If True, do MAP fits on the power spectrum to find the highest data/model outlier; otherwise, do a Maximum Likelihood fit. If True, the simulated power spectra will be generated from an MCMC run; otherwise the method will employ the approximated covariance matrix for the parameters derived from the likelihood surface to generate samples from that likelihood function.

nsimint, optional, default 1000

Number of fake power spectra to simulate from the posterior sample. Note that this number sets the resolution of the resulting p-value. For nsim=1000, the highest resolution that can be achieved is \(10^{-3}\).

niterint, optional, default 200

If sample is None, this variable will be used to set the number of steps in the MCMC procedure after burn-in.

nwalkersint, optional, default 500

If sample is None, this variable will be used to set the number of MCMC chains run in parallel in the sampler.

burninint, optional, default 200

If sample is None, this variable will be used to set the number of burn-in steps to be discarded in the initial phase of the MCMC run

namestrstr, optional, default test

A string to be used for storing MCMC output and plots to disk

seedint, optional, default None

An optional number to seed the random number generator with, for reproducibility of the results obtained with this method.

Returns:
pvalfloat

The p-value that the highest data/model outlier is produced by random noise, calibrated using simulated power spectra from an MCMC run.


calibrate_lrt(lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)

Calibrate the outcome of a Likelihood Ratio Test via MCMC.

In order to compare models via likelihood ratio test, one generally aims to compute a p-value for the null hypothesis (generally the simpler model). There are two special cases where the theoretical distribution (a chi-square distribution) used to compute that p-value analytically from the observed likelihood ratio is not applicable:

  • the models are not nested (i.e. Model 1 is not a special, simpler case of Model 2),

  • the parameter values fixed in Model 2 to retrieve Model 1 are at the edges of parameter space (e.g. if one must set, say, an amplitude to zero in order to remove a component in the more complex model, and negative amplitudes are excluded a priori)

In these cases, the observed likelihood ratio must be calibrated via simulations of the simpler model (Model 1), using MCMC to take into account the uncertainty in the parameters. This function does exactly that: it computes the likelihood ratio for the observed data, and produces simulations to calibrate the likelihood ratio and compute a p-value for observing the data under the assumption that Model 1 is true.

If max_post=True, the code will use MCMC to sample the posterior of the parameters and simulate fake data from there.

If max_post=False, the code will use the covariance matrix derived from the fit to simulate data sets for comparison.

Parameters:
lpost1object of a subclass of Posterior

The Posterior object for model 1

t1iterable

The starting parameters for model 1

lpost2object of a subclass of Posterior

The Posterior object for model 2

t2iterable

The starting parameters for model 2

negbool, optional, default True

Boolean flag to decide whether to use the negative log-likelihood or log-posterior

max_post: bool, optional, default False

If True, set the internal state to do the optimization with the log-likelihood rather than the log-posterior.

Returns:
pvaluefloat [0,1]

The p-value of the observed likelihood ratio under the null hypothesis (Model 1)

compute_lrt(lpost1, t1, lpost2, t2, neg=True, max_post=False)

This function computes the Likelihood Ratio Test between two nested models.

Parameters:
lpost1object of a subclass of Posterior

The Posterior object for model 1

t1iterable

The starting parameters for model 1

lpost2object of a subclass of Posterior

The Posterior object for model 2

t2iterable

The starting parameters for model 2

negbool, optional, default True

Boolean flag to decide whether to use the negative log-likelihood or log-posterior

max_post: bool, optional, default False

If True, set the internal state to do the optimization with the log-likelihood rather than the log-posterior.

Returns:
lrtfloat

The likelihood ratio for model 2 and model 1

res1OptimizationResults object

Contains the result of fitting lpost1

res2OptimizationResults object

Contains the results of fitting lpost2

fit(lpost, t0, neg=True, scipy_optimize_options=None)[source]

Do either a Maximum-A-Posteriori (MAP) or Maximum Likelihood (ML) fit to the power spectrum.

MAP fits include priors, ML fits do not.

Parameters:
lpoststingray.modeling.PSDPosterior object

An instance of class stingray.modeling.PSDPosterior that defines the function to be minimized (either in loglikelihood or logposterior)

t0{list | numpy.ndarray}

List/array with set of initial parameters

negbool, optional, default True

Boolean to be passed to lpost, setting whether to use the negative posterior or the negative log-likelihood.

scipy_optimize_optionsdict, optional, default None

A dictionary with options for scipy.optimize.minimize, directly passed on as keyword arguments.

Returns:
resOptimizationResults object

An object containing useful summaries of the fitting procedure. For details, see documentation of OptimizationResults.

plotfits(res1, res2=None, save_plot=False, namestr='test', log=False)[source]

Plotting method that allows plotting of either one or two best-fit models together with the data.

Plots a power spectrum with the best-fit model, as well as the data/model residuals for each model.

Parameters:
res1OptimizationResults object

Output of a successful fitting procedure

res2OptimizationResults object, optional, default None

Optional output of a second successful fitting procedure, e.g. with a competing model

save_plotbool, optional, default False

If True, the resulting figure will be saved to a file

namestrstr, optional, default test

If save_plot is True, this string defines the path and file name for the output plot

logbool, optional, default False

If True, plot the axes logarithmically.

sample(lpost, t0, cov=None, nwalkers=500, niter=100, burnin=100, threads=1, print_results=True, plot=False, namestr='test')[source]

Sample the posterior distribution defined in lpost using MCMC. Here we use the emcee package, but other implementations could in principle be used.

Parameters:
lpostinstance of a Posterior subclass

an instance of class Posterior or one of its subclasses that defines the function to be minimized (either in loglikelihood or logposterior)

t0iterable

list or array containing the starting parameters. Its length must match lpost.model.npar.

nwalkersint, optional, default 500

The number of walkers (chains) to use during the MCMC procedure. The more walkers are used, the slower the estimation will be, but the better the final distribution is likely to be.

niterint, optional, default 100

The number of iterations to run the MCMC chains for. The larger this number, the longer the estimation will take, but the higher the chance that the walkers have actually converged on the true posterior distribution.

burninint, optional, default 100

The number of iterations to run the walkers before convergence is assumed to have occurred. This part of the chain will be discarded before sampling from what is then assumed to be the posterior distribution desired.

threadsint, optional, default 1

The number of threads for parallelization. Default is 1, i.e. no parallelization

print_resultsbool, optional, default True

Boolean flag setting whether the results of the MCMC run should be printed to standard output

plotbool, optional, default False

Boolean flag setting whether summary plots of the MCMC chains should be produced

namestrstr, optional, default test

Optional string for output file names for the plotting.

Returns:
resSamplingResults object

An object containing useful summaries of the sampling procedure. For details see documentation of SamplingResults.

simulate_highest_outlier(s_all, lpost, t0, max_post=True, seed=None)[source]

Simulate \(n\) power spectra from a model and then find the highest data/model outlier in each.

The data/model outlier is defined as

\[T_R = \max{(2\,\mathrm{data}/\mathrm{model})}.\]
Parameters:
s_allnumpy.ndarray

A list of parameter values derived either from an approximation of the likelihood surface, or from an MCMC run. Has dimensions (n, ndim), where n is the number of simulated power spectra to generate, and ndim the number of model parameters.

lpostinstance of a Posterior subclass

an instance of class Posterior or one of its subclasses that defines the function to be minimized (either in loglikelihood or logposterior)

t0iterable

list or array containing the starting parameters. Its length must match lpost.model.npar.

max_post: bool, optional, default False

If True, do MAP fits on the power spectrum to find the highest data/model outlier; otherwise, do a Maximum Likelihood fit. If True, the simulated power spectra will be generated from an MCMC run; otherwise the method will employ the approximated covariance matrix for the parameters derived from the likelihood surface to generate samples from that likelihood function.

seedint, optional, default None

An optional number to seed the random number generator with, for reproducibility of the results obtained with this method.

Returns:
max_y_allnumpy.ndarray

An array of maximum outliers for each simulated power spectrum

simulate_lrts(s_all, lpost1, t1, lpost2, t2, seed=None)[source]

Simulate likelihood ratios for two given models based on MCMC samples for the simpler model (i.e. the null hypothesis).

Parameters:
s_allnumpy.ndarray of shape (nsamples, lpost1.npar)

An array with MCMC samples derived from the null hypothesis model in lpost1. Its second dimension must match the number of free parameters in lpost1.model.

lpost1LogLikelihood or Posterior subclass object

Object containing the null hypothesis model

t1iterable of length lpost1.npar

A starting guess for fitting the model in lpost1

lpost2LogLikelihood or Posterior subclass object

Object containing the alternative hypothesis model

t2iterable of length lpost2.npar

A starting guess for fitting the model in lpost2


seedint, optional default None

A seed to initialize the numpy.random.RandomState object to be passed on to _generate_data. Useful for producing exactly reproducible results

Returns:
lrt_simnumpy.ndarray

An array with the simulated likelihood ratios for the simulated data


Auxiliary Classes

These are helper classes instantiated by stingray.modeling.ParameterEstimation and its subclasses to organize the results of model fitting and sampling in a more meaningful, easily accessible way.

class stingray.modeling.OptimizationResults(lpost, res, neg=True, log=None)[source]

Helper class that will contain the results of the regression. Less fiddly than a dictionary.

Parameters:
lpost: instance of Posterior or one of its subclasses

The object containing the function that is being optimized in the regression

res: instance of scipy.optimize.OptimizeResult

The object containing the results from a optimization run

negbool, optional, default True

A flag that sets whether the log-likelihood or negative log-likelihood is being used

loga logging.getLogger() object, default None

You can pass a pre-defined object for logging, else a new logger will be instantiated

Attributes:
resultfloat

The result of the optimization, i.e. the function value at the minimum that the optimizer found

p_optiterable

The list of parameters at the minimum found by the optimizer

modelastropy.modeling.Model instance

The parametric model fit to the data

covnumpy.ndarray

The covariance matrix for the parameters, has shape (len(p_opt), len(p_opt))

errnumpy.ndarray

The standard deviation of the parameters, derived from the diagonal of cov. Has the same shape as p_opt

mfitnumpy.ndarray

The values of the model for all x

deviancefloat

The deviance, calculated as -2*log(likelihood)

aicfloat

The Akaike Information Criterion, derived from the log(likelihood) and often used in model comparison between non-nested models; For more details, see [7]

bicfloat

The Bayesian Information Criterion, derived from the log(likelihood) and often used in model comparison between non-nested models; For more details, see [8]

meritfloat

sum of squared differences between data and model, normalized by the model values

dofint

The number of degrees of freedom in the problem, defined as the number of data points - the number of parameters

sexpint

2*(number of parameters)*(number of data points)

ssdfloat

sqrt(2*(sexp)), expected sum of data-model residuals

sobsfloat

sum of data-model residuals
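
As a hedged usage sketch, continuing any of the fit examples in this section, the most commonly used attributes can be read off directly:

>>> best_pars = res.p_opt          # parameters at the optimum
>>> par_errors = res.err           # rough 1-sigma errors from the covariance
>>> aic, bic = res.aic, res.bic    # information criteria for model comparison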

_compute_covariance(lpost, res)[source]

Compute the covariance of the parameters using inverse of the Hessian, i.e. the second-order derivative of the log-likelihood. Also calculates an estimate of the standard deviation in the parameters, using the square root of the diagonal of the covariance matrix.

The Hessian is either estimated directly by the chosen method of fitting, or approximated using the statsmodels approx_hess function.

Parameters:
lpost: instance of Posterior or one of its subclasses

The object containing the function that is being optimized in the regression

res: instance of scipy's OptimizeResult class

The object containing the results from a optimization run

_compute_criteria(lpost)[source]

Compute various information criteria useful for model comparison in non-nested models.

Currently implemented are the Akaike Information Criterion [9] and the Bayesian Information Criterion [10].

Parameters:
lpost: instance of Posterior or one of its subclasses

The object containing the function that is being optimized in the regression

_compute_model(lpost)[source]

Compute the values of the best-fit model for all x.

Parameters:
lpost: instance of Posterior or one of its subclasses

The object containing the function that is being optimized in the regression

_compute_statistics(lpost)[source]

Compute some useful fit statistics, like the degrees of freedom and the figure of merit.

Parameters:
lpost: instance of Posterior or one of its subclasses

The object containing the function that is being optimized in the regression

print_summary(lpost)[source]

Print a useful summary of the fitting procedure to screen or a log file.

Parameters:
lpostinstance of Posterior or one of its subclasses

The object containing the function that is being optimized in the regression

class stingray.modeling.SamplingResults(sampler, ci_min=5, ci_max=95, log=None)[source]

Helper class that will contain the results of the sampling in a handy format.

Less fiddly than a dictionary.

Parameters:
sampler: emcee.EnsembleSampler object

The object containing the sampler that’s done all the work.

ci_min: float out of [0,100]

The lower bound percentile for printing credible intervals on the parameters

ci_max: float out of [0,100]

The upper bound percentile for printing credible intervals on the parameters

loga logging.getLogger() object, default None

You can pass a pre-defined object for logging, else a new logger will be instantiated

Attributes:
samplesnumpy.ndarray

An array of samples from the MCMC run, including all chains flattened into one long (nwalkers*niter, ndim) array

nwalkersint

The number of chains used in the MCMC procedure

niterint

The number of MCMC iterations in each chain

ndimint

The dimensionality of the problem, i.e. the number of parameters in the model

acceptancefloat

The mean acceptance ratio, calculated over all chains

Lfloat

The product of acceptance ratio and number of samples

acorfloat

The autocorrelation length for the chains; should be shorter than the chains themselves for independent sampling

rhatfloat

weighted average of between-sequence variance and within-sequence variance; Gelman-Rubin convergence statistic [11]

meannumpy.ndarray

An array of size ndim, with the posterior means of the parameters derived from the MCMC chains

stdnumpy.ndarray

An array of size ndim with the posterior standard deviations of the parameters derived from the MCMC chains

cinumpy.ndarray

An array of shape (ndim, 2) containing the lower and upper bounds of the credible interval (the Bayesian equivalent of the confidence interval) for each parameter using the bounds set by ci_min and ci_max
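
Continuing the sampling sketch earlier in this section, a hedged usage note on accessing these summaries:

>>> posterior_means = sample_res.mean   # length-ndim array of posterior means
>>> credible = sample_res.ci            # (ndim, 2) credible-interval bounds
>>> rhat = sample_res.rhat              # should be close to 1 if chains converged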

_check_convergence(sampler)[source]

Compute common statistics for convergence of the MCMC chains. While you can never be completely sure that your chains converged, these present reasonable heuristics to give an indication whether convergence is very far off or reasonably close.

Currently implemented are the autocorrelation time [12] and the Gelman-Rubin convergence criterion [13].

Parameters:
sampleran emcee.EnsembleSampler object

_compute_rhat(sampler)[source]

Compute Gelman-Rubin convergence criterion [14].

Parameters:
sampleran emcee.EnsembleSampler object

_infer(ci_min=5, ci_max=95)[source]

Infer the Posterior means, standard deviations and credible intervals (i.e. the Bayesian equivalent to confidence intervals) from the Posterior samples for each parameter.

Parameters:
ci_minfloat

Lower bound to the credible interval, given as percentage between 0 and 100

ci_maxfloat

Upper bound to the credible interval, given as percentage between 0 and 100

plot_results(nsamples=1000, fig=None, save_plot=False, filename='test.pdf')[source]

Plot some results in a triangle plot. If the corner package is installed, it will be used for the plotting; if not, this method uses its own code to make a triangle plot.

By default, this method returns a matplotlib.Figure object, but if save_plot=True the plot can also be saved to file automatically.

Parameters:
nsamplesint, default 1000

The maximum number of samples used for plotting.

figmatplotlib.Figure instance, default None

If created externally, you can pass a Figure instance to this method. If none is passed, the method will create one internally.

save_plotbool, default False

If True save the plot to file with a file name specified by the keyword filename. If False just return the Figure object

filenamestr

Name of the output file with the figure

print_results()[source]

Print results of the MCMC run on screen or to a log-file.


Convenience Functions

These functions are designed to help the user perform common tasks related to modelling and parameter estimation. In particular, the function stingray.modeling.set_logprior() is designed to help users set priors in their stingray.modeling.Posterior subclass objects.

stingray.modeling.set_logprior(lpost, priors)[source]

This function constructs the logprior method required to successfully use a Posterior object.

All instances of class Posterior and its subclasses require a logprior method to be implemented. However, priors are strongly problem-dependent and therefore usually user-defined.

This function allows for setting the logprior method on any instance of class Posterior efficiently by allowing the user to pass a dictionary of priors and an instance of class Posterior.

Parameters:
lpostPosterior object

An instance of class Posterior or any of its subclasses

priorsdict

A dictionary containing the prior definitions. Keys are parameter names as defined by the astropy.modeling.FittableModel instance supplied to the model parameter in Posterior. Items are functions that take a parameter as input and return the log-prior probability of that parameter.

Returns:
logpriorfunction

The function definition for the prior

Examples

Make a light curve and power spectrum

>>> import numpy as np
>>> from astropy.modeling import models
>>> from stingray import Lightcurve, Powerspectrum
>>> from stingray.modeling import PSDPosterior, set_logprior
>>> photon_arrivals = np.sort(np.random.uniform(0,1000, size=10000))
>>> lc = Lightcurve.make_lightcurve(photon_arrivals, dt=1.0)
>>> ps = Powerspectrum(lc, norm="frac")

Define the model

>>> pl = models.PowerLaw1D()
>>> pl.x_0.fixed = True

Instantiate the posterior:

>>> lpost = PSDPosterior(ps.freq, ps.power, pl, m=ps.m)

Define the priors:

>>> p_alpha = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
>>> p_amplitude = lambda amplitude: ((-10 <= np.log(amplitude)) &
...                                 ((np.log(amplitude) <= 10.0)))
>>> priors = {"alpha":p_alpha, "amplitude":p_amplitude}

Set the logprior method in the lpost object:

>>> lpost.logprior = set_logprior(lpost, priors)
stingray.modeling.scripts.fit_crossspectrum(cs, model, starting_pars=None, max_post=False, priors=None, fitmethod='L-BFGS-B')[source]

Fit a parametric model to a cross spectrum. A typical use case is a sum of Lorentzians, possibly including white noise: each Lorentzian has three parameters (amplitude, centroid position, full-width at half maximum), plus one extra parameter if the white noise level should be fit as well. Priors for each parameter can be included in case max_post = True, in which case the function will attempt a Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one entry for each parameter. For such a model, the parameter names are (amplitude_i, x_0_i, fwhm_i) for each i out of a total of N Lorentzians, and the white noise level has the parameter amplitude_N (indices start at 0). For example, a model with two Lorentzians and a white noise level would have parameters: [amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].

Parameters:
csCrossspectrum

A Crossspectrum object with the data to be fit

model: astropy.modeling.models class instance

The parametric model supposed to represent the data. For details see the astropy.modeling documentation

starting_parsiterable, optional, default None

The list of starting guesses for the optimizer. If it is not provided, then default parameters are taken from model. See explanation above for ordering of parameters in this list.

max_postbool, optional, default False

If True, perform a Maximum-A-Posteriori fit of the data rather than a Maximum Likelihood fit. Note that this requires priors to be specified, otherwise this will cause an exception!

priors{dict | None}, optional, default None

Dictionary with priors for the MAP fit. This should be of the form {“parameter name”: probability distribution, …}

fitmethodstring, optional, default “L-BFGS-B”

Specifies an optimization algorithm to use. Supply any valid option for scipy.optimize.minimize.

Returns:
parestPSDParEst object

A PSDParEst object for further analysis

resOptimizationResults object

The OptimizationResults object storing useful results and quantities relating to the fit

stingray.modeling.scripts.fit_lorentzians(ps, nlor, starting_pars, fit_whitenoise=True, max_post=False, priors=None, fitmethod='L-BFGS-B')[source]

Fit a number of Lorentzians to a power spectrum, possibly including white noise. Each Lorentzian has three parameters (amplitude, centroid position, full-width at half maximum), plus one extra parameter if the white noise level should be fit as well. Priors for each parameter can be included in case max_post = True, in which case the function will attempt a Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one entry for each parameter. The parameter names are (amplitude_i, x_0_i, fwhm_i) for each i out of a total of N Lorentzians, and the white noise level has the parameter amplitude_N (indices start at 0). For example, a model with two Lorentzians and a white noise level would have parameters: [amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].

Parameters:
psPowerspectrum

A Powerspectrum object with the data to be fit

nlorint

The number of Lorentzians to fit

starting_parsiterable

The list of starting guesses for the optimizer. See explanation above for the ordering of parameters in this list.

fit_whitenoisebool, optional, default True

If True, the code will attempt to fit a white noise level along with the Lorentzians. Be sure to include a starting parameter for the optimizer in starting_pars!

max_postbool, optional, default False

If True, perform a Maximum-A-Posteriori fit of the data rather than a Maximum Likelihood fit. Note that this requires priors to be specified, otherwise this will cause an exception!

priors{dict | None}, optional, default None

Dictionary with priors for the MAP fit. This should be of the form {“parameter name”: probability distribution, …}

fitmethodstring, optional, default “L-BFGS-B”

Specifies an optimization algorithm to use. Supply any valid option for scipy.optimize.minimize.

Returns:
parestPSDParEst object

A PSDParEst object for further analysis

resOptimizationResults object

The OptimizationResults object storing useful results and quantities relating to the fit

Examples

We start by making an example power spectrum with three Lorentzians:

>>> np.random.seed(400)
>>> nlor = 3

>>> x_0_0 = 0.5
>>> x_0_1 = 2.0
>>> x_0_2 = 7.5
>>> amplitude_0 = 150.0
>>> amplitude_1 = 50.0
>>> amplitude_2 = 15.0
>>> fwhm_0 = 0.1
>>> fwhm_1 = 1.0
>>> fwhm_2 = 0.5

We will also include a white noise level:

>>> whitenoise = 2.0

>>> model = (models.Lorentz1D(amplitude_0, x_0_0, fwhm_0) +
...          models.Lorentz1D(amplitude_1, x_0_1, fwhm_1) +
...          models.Lorentz1D(amplitude_2, x_0_2, fwhm_2) +
...          models.Const1D(whitenoise))
>>> freq = np.linspace(0.01, 10.0, 1000)
>>> p = model(freq)
>>> noise = np.random.exponential(size=len(freq))
>>> power = p*noise
>>> ps = Powerspectrum()
>>> ps.freq = freq
>>> ps.power = power
>>> ps.df = ps.freq[1] - ps.freq[0]
>>> ps.m = 1

Now we have to guess starting parameters. For each Lorentzian, we have amplitude, centroid position and fwhm, and this pattern repeats for each Lorentzian in the fit. The white noise level is the last parameter.

>>> t0 = [150, 0.4, 0.2, 50, 2.3, 0.6, 20, 8.0, 0.4, 2.1]

We’re ready for doing the fit:

>>> parest, res = fit_lorentzians(ps, nlor, t0)

res contains a whole array of useful information about the fit, for example the parameters at the optimum:

>>> p_opt = res.p_opt

stingray.modeling.scripts.fit_powerspectrum(ps, model, starting_pars=None, max_post=False, priors=None, fitmethod='L-BFGS-B')[source]

Fit a parametric model to a power spectrum. A typical use case is a sum of Lorentzians, possibly including white noise: each Lorentzian has three parameters (amplitude, centroid position, full-width at half maximum), plus one extra parameter if the white noise level should be fit as well. Priors for each parameter can be included in case max_post = True, in which case the function will attempt a Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one entry for each parameter. For such a model, the parameter names are (amplitude_i, x_0_i, fwhm_i) for each i out of a total of N Lorentzians, and the white noise level has the parameter amplitude_N (indices start at 0). For example, a model with two Lorentzians and a white noise level would have parameters: [amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].

Parameters:
psPowerspectrum

A Powerspectrum object with the data to be fit

model: astropy.modeling.models class instance

The parametric model supposed to represent the data. For details see the astropy.modeling documentation

starting_parsiterable, optional, default None

The list of starting guesses for the optimizer. If it is not provided, then default parameters are taken from model. See explanation above for ordering of parameters in this list.

max_postbool, optional, default False

If True, perform a Maximum-A-Posteriori fit of the data rather than a Maximum Likelihood fit. Note that this requires priors to be specified, otherwise this will cause an exception!

priors{dict | None}, optional, default None

Dictionary with priors for the MAP fit. This should be of the form {“parameter name”: probability distribution, …}

fitmethodstring, optional, default “L-BFGS-B”

Specifies an optimization algorithm to use. Supply any valid option for scipy.optimize.minimize.

Returns:
parestPSDParEst object

A PSDParEst object for further analysis

resOptimizationResults object

The OptimizationResults object storing useful results and quantities relating to the fit

Examples

We start by making an example power spectrum from a power-law model plus a constant white-noise level:

>>> m = 1
>>> nfreq = 100000
>>> freq = np.linspace(1, 1000, nfreq)

>>> np.random.seed(100)  # set the seed for the random number generator
>>> noise = np.random.exponential(size=nfreq)
>>> model = models.PowerLaw1D() + models.Const1D()
>>> model.x_0_0.fixed = True
>>> alpha_0 = 2.0
>>> amplitude_0 = 100.0
>>> amplitude_1 = 2.0
>>> model.alpha_0 = alpha_0
>>> model.amplitude_0 = amplitude_0
>>> model.amplitude_1 = amplitude_1
>>> p = model(freq)
>>> power = noise * p
>>> ps = Powerspectrum()
>>> ps.freq = freq
>>> ps.power = power
>>> ps.m = m
>>> ps.df = freq[1] - freq[0]
>>> ps.norm = "leahy"

Now we have to guess starting parameters, one for each free parameter of the model to fit, in the order given by model.param_names.

>>> t0 = [80, 1.5, 2.5]

Let’s also make a model to test:

>>> model_to_test = models.PowerLaw1D() + models.Const1D()
>>> model_to_test.amplitude_1.fixed = True

We’re ready for doing the fit:

>>> parest, res = fit_powerspectrum(ps, model_to_test, t0)

res contains a whole array of useful information about the fit, for example the parameters at the optimum:

>>> p_opt = res.p_opt


Pulsar

This submodule broadly defines functionality related to (X-ray) pulsar data analysis, especially periodicity searches.

class stingray.pulse.SincSquareModel(amplitude=1.0, mean=0.0, width=1.0, **kwargs)[source]
stingray.pulse.ef_profile_stat(profile, err=None)[source]

Calculate the epoch folding statistics à la Leahy et al. (1983).

Parameters:
profilearray

The pulse profile

Returns:
statfloat

The epoch folding statistics

Other Parameters:
errfloat or array

The uncertainties on the pulse profile
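
A minimal, hedged sketch with a hand-written profile (the numbers are arbitrary and purely illustrative):

>>> import numpy as np
>>> from stingray.pulse import ef_profile_stat
>>> profile = np.array([10., 12., 18., 30., 18., 12., 10., 9.])  # folded counts per bin
>>> stat = ef_profile_stat(profile)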

stingray.pulse.epoch_folding_search(times, frequencies, ...)[source]

Performs epoch folding at trial frequencies in photon data.

If no exposure correction is needed and numba is installed, this function uses a fast algorithm to perform the folding. Otherwise, it runs a much slower algorithm, which however yields a more precise result. The search can be done in segments and the results averaged; use segment_size to control this.

Parameters:
timesarray-like

the event arrival times

frequenciesarray-like

the trial values for the frequencies

Returns:
(fgrid, stats) or (fgrid, fdgrid, stats), as follows:
fgridarray-like

frequency grid of the epoch folding periodogram

fdgridarray-like

frequency derivative grid. Only returned if fdots is an array.

statsarray-like

the epoch folding statistics corresponding to each frequency bin.

Other Parameters:
nbinint

the number of bins of the folded profiles

segment_sizefloat

the length of the segments to be averaged in the periodogram

fdotsarray-like

trial values of the first frequency derivative (optional)

expocorrbool

correct for the exposure (Use it if the period is comparable to the length of the good time intervals). If True, GTIs have to be specified via the gti keyword

gti[[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good time intervals

weightsarray-like

weight for each time. This might be, for example, the number of counts if the times array contains the time bins of a light curve
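
A hedged sketch of a search over trial frequencies, using the function name reconstructed above; the uniform event times below carry no pulsation and are purely illustrative:

>>> import numpy as np
>>> from stingray.pulse import epoch_folding_search
>>> times = np.sort(np.random.uniform(0.0, 1000.0, 20000))  # event arrival times
>>> trial_freqs = np.linspace(1.4, 1.6, 201)
>>> fgrid, efstats = epoch_folding_search(times, trial_freqs, nbin=16)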

stingray.pulse.fftfit(prof, template=None, quick=False, sigma=None, use_bootstrap=False, **fftfit_kwargs)[source]

Align a template to a pulse profile.

Parameters:
profarray

The pulse profile

templatearray, default None

The template of the pulse used to perform the TOA calculation. If None, a simple sinusoid is used

Returns:
mean_amp, std_ampfloats

Mean and standard deviation of the amplitude

mean_phase, std_phasefloats

Mean and standard deviation of the phase

Other Parameters:
sigmaarray

error on profile bins (currently has no effect)

use_bootstrapbool

Calculate errors using a bootstrap method, with fftfit_error

**fftfit_kwargsadditional arguments for fftfit_error
stingray.pulse.fit_gaussian(x, y, amplitude=1.5, mean=0.0, stddev=2.0, tied={}, fixed={}, bounds={})[source]

Fit a gaussian function to x,y values.

Parameters:
xarray-like
yarray-like
Returns:
gfunction

The best-fit function, accepting x as input and returning the best-fit model as output

Other Parameters:
amplitudefloat

The initial value for the amplitude

meanfloat

The initial value for the mean of the gaussian function

stddevfloat

The initial value for the standard deviation of the gaussian function

tieddict
fixeddict
boundsdict

Parameters to be passed to the underlying astropy models (see the astropy.modeling documentation)
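
A minimal, hedged sketch; the synthetic Gaussian-plus-noise data and the starting values are illustrative assumptions:

>>> import numpy as np
>>> from stingray.pulse import fit_gaussian
>>> x = np.linspace(-5.0, 5.0, 101)
>>> y = 3.0 * np.exp(-0.5 * x**2) + np.random.normal(0.0, 0.1, x.size)
>>> g = fit_gaussian(x, y, amplitude=2.0, mean=0.5, stddev=1.5)
>>> y_model = g(x)  # evaluate the best-fit model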

stingray.pulse.fit_sinc(x, y, amp=1.5, mean=0.0, width=1.0, tied={}, fixed={}, bounds={}, obs_length=None)[source]

Fit a sinc function to x,y values.

Parameters:
xarray-like
yarray-like
Returns:
sincfitfunction

The best-fit function, accepting x as input and returning the best-fit model as output

Other Parameters:
ampfloat

The initial value for the amplitude

meanfloat

The initial value for the mean of the sinc

obs_lengthfloat

The length of the observation. Default None. If it’s defined, it fixes width to 1/(pi*obs_length), as expected from epoch folding periodograms

widthfloat

The initial value for the width of the sinc. Only used if obs_length is None

tieddict
fixeddict
boundsdict

Parameters to be passed to the underlying astropy models (see the astropy.modeling documentation)

stingray.pulse.fold_events(times, *frequency_derivatives, **opts)[source]

Epoch folding with exposure correction.

By default, the keyword times accepts a list of unbinned photon arrival times. If the input data is a (binned) light curve, then times will contain the time stamps of the observation, and weights should be set to the corresponding fluxes or counts.

Parameters:
timesarray of floats

Photon arrival times, or, if weights is set, time stamps of a light curve.

f, fdot, fddot…float

The frequency and any number of derivatives.

Returns:
phase_binsarray of floats

The phases corresponding to the pulse profile

profilearray of floats

The pulse profile

profile_errarray of floats

The uncertainties on the pulse profile

Other Parameters:
nbinint, optional, default 16

The number of bins in the pulse profile

weightsfloat or array of floats, optional

The weights of the data. It can either be specified as a single value for all points, or an array with the same length as time

gti[[gti0_0, gti0_1], [gti1_0, gti1_1], …], optional

Good time intervals

ref_timefloat, optional, default 0

Reference time for the timing solution

expocorrbool, default False

Correct each bin for exposure (use when the period of the pulsar is comparable to that of GTIs)

modestr, [“ef”, “pdm”], default “ef”

Whether to calculate the epoch folding or phase dispersion minimization folded profile. For “ef”, it calculates the (weighted) sum of the data points in each phase bin, for “pdm”, the variance in each phase bin
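
A hedged sketch of folding a set of arrival times at a known frequency; the synthetic, unpulsed event list is an illustrative assumption:

>>> import numpy as np
>>> from stingray.pulse import fold_events
>>> events = np.sort(np.random.uniform(0.0, 100.0, 5000))  # photon arrival times
>>> phase_bins, profile, profile_err = fold_events(events, 1.2, nbin=32)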

stingray.pulse.get_TOA(prof, period, tstart, template=None, additional_phase=0, quick=False, debug=False, use_bootstrap=False, **fftfit_kwargs)[source]

Calculate the Time-Of-Arrival of a pulse.

Parameters:
profarray

The pulse profile

templatearray, default None

The template of the pulse used to perform the TOA calculation, if any. Otherwise, the default template of fftfit is used

tstartfloat

The time at the start of the pulse profile

Returns:
toa, toastdfloats

Mean and standard deviation of the TOA

Other Parameters:
nstepint, optional, default 100

Number of steps for the bootstrap method

stingray.pulse.get_orbital_correction_from_ephemeris_file(mjdstart, mjdstop, parfile, ntimes=1000, ephem='DE405', return_pint_model=False)[source]

Get a correction for orbital motion from pulsar parameter file.

Parameters:
mjdstart, mjdstopfloat

Start and end of the time interval where we want the orbital solution

parfilestr

Any parameter file understood by PINT (Tempo or Tempo2 format)

Returns:
correction_secfunction

Function that accepts as input an array of times in seconds and a floating-point MJDref value, and returns the deorbited times

correction_mjdfunction

Function that accepts times in MJDs and returns the deorbited times.

Other Parameters:
ntimesint

Number of time intervals to use for interpolation. Default 1000

stingray.pulse.htest(data, nmax=20, datatype='binned', err=None)[source]

H-test statistic, à la De Jager+89, A&A, 221, 180D, eq. 2.

If datatype is “binned” or “gauss”, uses the formulation from Bachetti+2021, ApJ, arxiv:2012.11397

Parameters:
dataarray of floats

Phase values or binned flux values

nmaxint, default 20

Maximum number of harmonics for Z^2_n

Returns:
Mint

The best number of harmonics that describe the signal.

htestfloat

The htest statistics of the events.

Other Parameters:
datatypestr

The datatype of data: “events” if phase values between 0 and 1, “binned” if folded pulse profile from photons, “gauss” if folded pulse profile with normally-distributed fluxes

errfloat

The uncertainty on the pulse profile fluxes (required for datatype=”gauss”, ignored otherwise)
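
A hedged sketch on event phases; the uniform phases below (pure noise) are used only for illustration:

>>> import numpy as np
>>> from stingray.pulse import htest
>>> phases = np.random.uniform(0.0, 1.0, 1000)  # pulse phases of the events
>>> best_m, h = htest(phases, datatype="events")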

stingray.pulse.p_to_f(*period_derivatives)[source]

Convert periods into frequencies, and vice versa.

For now, limited to the third derivative. An error is raised when a fourth derivative is passed.

Parameters:
p, pdot, pddot, …floats

period derivatives, starting from zeroth and in increasing order

Examples

>>> assert p_to_f() == []
>>> assert np.allclose(p_to_f(1), [1])
>>> assert np.allclose(p_to_f(1, 2), [1, -2])
>>> assert np.allclose(p_to_f(1, 2, 3), [1, -2, 5])
>>> assert np.allclose(p_to_f(1, 2, 3, 4), [1, -2, 5, -16])
stingray.pulse.pdm_profile_stat(profile, sample_var, nsample)[source]

Calculate the phase dispersion minimization statistic following Stellingwerf (1978)

Parameters:
profilearray

The PDM pulse profile (variance as a function of phase)

sample_varfloat

The total population variance of the sample

nsampleint

The number of time bins in the initial time series.

Returns:
statfloat

The phase dispersion minimization statistic

stingray.pulse.phase_dispersion_search(times, flux, frequencies, ...)[source]

Performs folding at trial frequencies in time series data (i.e. a light curve of fluxes or photon counts) and computes the Phase Dispersion Minimization statistic.

If no exposure correction is needed and numba is installed, this function uses a fast algorithm to perform the folding. Otherwise, it runs a much slower algorithm, which however yields a more precise result. The search can be done in segments and the results averaged; use segment_size to control this.

Parameters:
timesarray-like

the time stamps of the time series

fluxarray-like

the flux or photon count values of the time series

frequenciesarray-like

the trial values for the frequencies

Returns:
(fgrid, stats) or (fgrid, fdgrid, stats), as follows:
fgridarray-like

frequency grid of the periodogram

fdgridarray-like

frequency derivative grid. Only returned if fdots is an array.

statsarray-like

the PDM statistics corresponding to each frequency bin.

Other Parameters:
nbinint

the number of bins of the folded profiles

segment_sizefloat

the length of the segments to be averaged in the periodogram

fdotsarray-like

trial values of the first frequency derivative (optional)

expocorrbool

correct for the exposure (Use it if the period is comparable to the length of the good time intervals). If True, GTIs have to be specified via the gti keyword

gti[[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good time intervals
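
A hedged sketch of a PDM search on a binned light curve, using the function name reconstructed above; the synthetic, unpulsed data are illustrative:

>>> import numpy as np
>>> from stingray.pulse import phase_dispersion_search
>>> times = np.arange(0.0, 100.0, 0.01)          # light-curve time stamps
>>> flux = np.random.poisson(100, times.size)    # counts in each bin
>>> trial_freqs = np.linspace(1.4, 1.6, 101)
>>> fgrid, pdm_stats = phase_dispersion_search(times, flux, trial_freqs, nbin=16)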

stingray.pulse.phase_exposure(start_time, stop_time, period, nbin=16, gti=None)[source]

Calculate the exposure on each phase of a pulse profile.

Parameters:
start_time, stop_timefloat

Starting and stopping time (or phase if period==1)

periodfloat

The pulse period (if 1, equivalent to phases)

Returns:
expoarray of floats

The normalized exposure of each bin in the pulse profile (1 is the highest exposure, 0 the lowest)

Other Parameters:
nbinint, optional, default 16

The number of bins in the profile

gti[[gti00, gti01], [gti10, gti11], …], optional, default None

Good Time Intervals
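
A hedged sketch: with a gap in the good time intervals, some phase bins of a 1-second pulse receive reduced exposure (the GTI values are illustrative):

>>> import numpy as np
>>> from stingray.pulse import phase_exposure
>>> gti = np.array([[0.0, 40.5], [60.2, 100.0]])   # observation with a gap
>>> expo = phase_exposure(0.0, 100.0, period=1.0, nbin=16, gti=gti)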

stingray.pulse.phaseogram(times, f, nph=128, nt=32, ph0=0, mjdref=None, fdot=0, fddot=0, pepoch=None, plot=False, phaseogram_ax=None, weights=None, **plot_kwargs)[source]

Calculate and plot the phaseogram of a pulsar observation.

The phaseogram is a 2-D histogram where the x axis is the pulse phase and the y axis is the time. It shows how the pulse phase changes with time, and it is very useful to see if the pulse solution is correct and/or if there are additional frequency derivatives appearing in the data (due to spin up or down, or even orbital motion)

Parameters:
timesarray

Event arrival times

ffloat

Pulse frequency

Returns:
phaseogr2-D matrix

The phaseogram

phasesarray-like

The x axis of the phaseogram (the x bins of the histogram), corresponding to the pulse phase in each column

timesarray-like

The y axis of the phaseogram (the y bins of the histogram), corresponding to the time at each row

additional_infodict

Additional information, like the pulse profile and the axes to modify the plot (the latter only if plot is True)

Other Parameters:
nphint

Number of phase bins

ntint

Number of time bins

ph0float

The starting phase of the pulse

mjdreffloat

MJD reference time. If given, the y axis of the plot will be in MJDs, otherwise it will be in seconds.

fdotfloat

First frequency derivative

fddotfloat

Second frequency derivative

pepochfloat

If the input pulse solution is referred to a given time, give it here. It has no effect (just a phase shift of the pulse) if fdot is zero. if mjdref is specified, pepoch MUST be in MJD

weightsarray

Weight for each time

plotbool

Return the axes in the additional_info, and don’t close the plot, so that the user can add information to it.
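
A hedged sketch of computing a phaseogram (without plotting) from synthetic arrival times:

>>> import numpy as np
>>> from stingray.pulse import phaseogram
>>> events = np.sort(np.random.uniform(0.0, 1000.0, 10000))
>>> phaseogr, phases, times, additional_info = phaseogram(events, 1.5, nph=64, nt=16)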

stingray.pulse.plot_phaseogram(phaseogram, phase_bins, time_bins, unit_str='s', ax=None, **plot_kwargs)[source]

Plot a phaseogram.

Parameters:
phaseogramNxM array

The phaseogram to be plotted

phase_binsarray of M + 1 elements

The bins on the x-axis

time_binsarray of N + 1 elements

The bins on the y-axis

Returns:
axmatplotlib.pyplot.axis instance

Axis where the phaseogram was plotted.

Other Parameters:
unit_strstr

String indicating the time unit (e.g. ‘s’, ‘MJD’, etc)

axmatplotlib.pyplot.axis instance

Axis to plot to. If None, create a new one.

plot_kwargsdict

Additional arguments to be passed to pcolormesh

stingray.pulse.plot_profile(phase, profile, err=None, ax=None)[source]

Plot a pulse profile showing some stats.

If err is None, the profile is assumed to be in counts and the Poisson confidence level is plotted. Otherwise, err is shown as error bars

Parameters:
phasearray-like

The bins on the x-axis

profilearray-like

The pulsed profile

Returns:
axmatplotlib.pyplot.axis instance

Axis where the profile was plotted.

Other Parameters:
axmatplotlib.pyplot.axis instance

Axis to plot to. If None, create a new one.

stingray.pulse.pulse_phase(times, *frequency_derivatives, **opts)[source]

Calculate pulse phase from the frequency and its derivatives.

Parameters:
timesarray of floats

The times at which the phase is calculated

*frequency_derivatives: floats

List of derivatives in increasing order, starting from zero.

Returns:
phasesarray of floats

The absolute pulse phase

Other Parameters:
ph0float

The starting phase

to_1bool, default True

Only return the fractional part of the phase, normalized from 0 to 1
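
A hedged sketch of computing phases from a frequency and its first derivative (the synthetic times and the values of f and fdot are illustrative):

>>> import numpy as np
>>> from stingray.pulse import pulse_phase
>>> times = np.sort(np.random.uniform(0.0, 100.0, 1000))
>>> phases = pulse_phase(times, 1.5, 1e-5)  # frequency and first derivative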

stingray.pulse.search_best_peaks(x, stat, threshold)[source]

Search peaks above threshold in an epoch folding periodogram.

If multiple contiguous values of stat are above threshold, only the largest one is returned (see Examples).

Parameters:
xarray-like

The x axis of the periodogram (frequencies, periods, …)

statarray-like

The y axis. It must have the same shape as x

thresholdfloat

The threshold value over which we look for peaks in the stat array

Returns:
best_xarray-like

the array containing the x position of the peaks above threshold. If no peaks are above threshold, an empty list is returned. The array is sorted in order of decreasing stat

best_statarray-like

for each best_x, give the corresponding stat value. Empty if no peaks above threshold.

Examples

>>> # Test multiple peaks
>>> x = np.arange(10)
>>> stat = [0, 0, 0.5, 0, 0, 1, 1, 2, 1, 0]
>>> best_x, best_stat = search_best_peaks(x, stat, 0.5)
>>> len(best_x)
2
>>> assert np.isclose(best_x[0], 7.0)
>>> assert np.isclose(best_x[1], 2.0)
>>> stat = [0, 0, 2.5, 0, 0, 1, 1, 2, 1, 0]
>>> best_x, best_stat = search_best_peaks(x, stat, 0.5)
>>> assert np.isclose(best_x[0], 2.0)
>>> # Test no peak above threshold
>>> x = np.arange(10)
>>> stat = [0, 0, 0.4, 0, 0, 0, 0, 0, 0, 0]
>>> best_x, best_stat = search_best_peaks(x, stat, 0.5)
>>> best_x
[]
>>> best_stat
[]
stingray.pulse.sinc_square_deriv(x, amplitude=1.0, mean=0.0, width=1.0)[source]

Calculate partial derivatives of sinc-squared.

Parameters:
x: array-like
Returns:
d_amplitudearray-like

partial derivative of sinc-squared function with respect to the amplitude

d_meanarray-like

partial derivative of sinc-squared function with respect to the mean

d_widtharray-like

partial derivative of sinc-squared function with respect to the width

Other Parameters:
amplitudefloat

the value for x=mean

meanfloat

mean of the sinc function

widthfloat

width of the sinc function

Examples

>>> assert np.allclose(sinc_square_deriv(0, amplitude=2.), [1., 0., 0.])
stingray.pulse.sinc_square_model(x, amplitude=1.0, mean=0.0, width=1.0)[source]

Calculate a sinc-squared function.

(sin(x)/x)**2

Parameters:
x: array-like
Returns:
sqvaluesarray-like

Return square of sinc function

Other Parameters:
amplitudefloat

the value for x=mean

meanfloat

mean of the sinc function

widthfloat

width of the sinc function

Examples

>>> assert np.isclose(sinc_square_model(0, amplitude=2.), 2.0)
stingray.pulse.test(**kwargs)

Run the tests for the package.

This method builds arguments for and then calls pytest.main.

Parameters:
packagestr, optional

The name of a specific package to test, e.g. ‘io.fits’ or ‘utils’. Accepts comma separated string to specify multiple packages. If nothing is specified all default tests are run.

argsstr, optional

Additional arguments to be passed to pytest.main in the args keyword argument.

docs_pathstr, optional

The path to the documentation .rst files.

parallelint or ‘auto’, optional

When provided, run the tests in parallel on the specified number of CPUs. If parallel is 'auto', it will use all the cores on the machine. Requires the pytest-xdist plugin.

pastebin(‘failed’, ‘all’, None), optional

Convenience option for turning on pytest pastebin output. Set to ‘failed’ to upload info for failed tests, or ‘all’ to upload info for all tests.

pdbbool, optional

Turn on PDB post-mortem analysis for failing tests. Same as specifying --pdb in args.

pep8bool, optional

Turn on PEP8 checking via the pytest-pep8 plugin and disable normal tests. Same as specifying --pep8 -k pep8 in args.

pluginslist, optional

Plugins to be passed to pytest.main in the plugins keyword argument.

remote_data{‘none’, ‘astropy’, ‘any’}, optional

Controls whether to run tests marked with @pytest.mark.remote_data. This can be set to run no tests with remote data (none), only ones that use data from http://data.astropy.org (astropy), or all tests that use remote data (any). The default is none.

repeatint, optional

If set, specifies how many times each test should be run. This is useful for diagnosing sporadic failures.

skip_docsbool, optional

When True, skips running the doctests in the .rst files.

test_pathstr, optional

Specify location to test by path. May be a single file or directory. Must be specified absolutely or relative to the calling directory.

verbosebool, optional

Convenience option to turn on verbose output from pytest. Passing True is the same as specifying -v in args.
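
For example, a typical invocation (a sketch; requires pytest, and pytest-xdist for the parallel option):

>>> from stingray import pulse
>>> pulse.test(verbose=True, remote_data='none')  # doctest: +SKIP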

stingray.pulse.z_n(data, n, datatype='events', err=None, norm=None)[source]

Z^2_n statistics, à la Buccheri+83, A&A, 128, 245, eq. 2.

If datatype is “binned” or “gauss”, uses the formulation from Bachetti+2021, ApJ, arXiv:2012.11397.

Parameters:
data : array of floats

Phase values or binned flux values

n : int, default 2

Number of harmonics, including the fundamental

Returns:
z2_n : float

The Z^2_n statistic of the events.

Other Parameters:
datatype : str

The data type: “events” if phase values between 0 and 1, “binned” if folded pulse profile from photons, “gauss” if folded pulse profile with normally-distributed fluxes

err : float

The uncertainty on the pulse profile fluxes (required for datatype=”gauss”, ignored otherwise)

norm : float

For backwards compatibility; if norm is not None, it is substituted for data, and data is ignored. This raises a DeprecationWarning.
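
A minimal sketch with synthetic, unpulsed data (for pure noise, Z^2_n follows a chi-squared distribution with 2n degrees of freedom, so small values are expected):

>>> import numpy as np
>>> from stingray.pulse import z_n
>>> rng = np.random.default_rng(42)
>>> phases = rng.uniform(0, 1, 10000)  # uniform phases: no pulsation
>>> stat = z_n(phases, n=2, datatype="events")
>>> assert stat < 30  # chi^2_4 values this large are vanishingly rare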

stingray.pulse.z_n_binned_events(profile, n)[source]

Z^2_n statistic for pulse profiles from binned events

See Bachetti+2021, arXiv:2012.11397

Parameters:
profile : array of floats

The folded pulse profile (containing the number of photons falling in each pulse bin)

n : int

Number of harmonics, including the fundamental

Returns:
z2_n : float

The value of the statistic
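
A sketch with a toy sinusoidal profile (bin number and pulsed fraction are illustrative):

>>> import numpy as np
>>> from stingray.pulse import z_n_binned_events
>>> rng = np.random.default_rng(0)
>>> phase_bins = np.linspace(0, 1, 32, endpoint=False)
>>> profile = rng.poisson(100 * (1 + 0.5 * np.sin(2 * np.pi * phase_bins)))
>>> stat = z_n_binned_events(profile, 2)  # strongly pulsed: stat is large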

stingray.pulse.z_n_binned_events_all(profile, nmax=20)[source]

Z^2_n statistic for multiple harmonics and binned events

See Bachetti+2021, arXiv:2012.11397

Parameters:
profile : array of floats

The folded pulse profile (containing the number of photons falling in each pulse bin)

nmax : int, default 20

Maximum number of harmonics, including the fundamental

Returns:
ks : list of ints

Harmonic numbers, from 1 to nmax (included)

z2_n : list of floats

The value of the statistic for all ks
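
A sketch scanning the first ten harmonics of the same kind of toy profile (assuming the return ordering documented above):

>>> import numpy as np
>>> from stingray.pulse import z_n_binned_events_all
>>> rng = np.random.default_rng(0)
>>> phase_bins = np.linspace(0, 1, 32, endpoint=False)
>>> profile = rng.poisson(100 * (1 + 0.5 * np.sin(2 * np.pi * phase_bins)))
>>> ks, stats = z_n_binned_events_all(profile, nmax=10)
>>> assert len(ks) == len(stats) == 10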

stingray.pulse.z_n_events(phase, n)[source]

Z^2_n statistics, à la Buccheri+83, A&A, 128, 245, eq. 2.

Parameters:
phase : array of floats

The phases of the events

n : int, default 2

Number of harmonics, including the fundamental

Returns:
z2_n : float

The Z^2_n statistic

stingray.pulse.z_n_events_all(phase, nmax=20)[source]

Z^2_n statistics, à la Buccheri+83, A&A, 128, 245, eq. 2.

Parameters:
phase : array of floats

The phases of the events

nmax : int, default 20

Maximum number of harmonics, including the fundamental

Returns:
ks : list of ints

Harmonic numbers, from 1 to nmax (included)

z2_n : list of floats

The Z^2_n statistic for all ks

stingray.pulse.z_n_gauss(profile, err, n)[source]

Z^2_n statistic for normally-distributed profiles

See Bachetti+2021, arXiv:2012.11397

Parameters:
profile : array of floats

The folded pulse profile

err : float

The (assumed constant) uncertainty on the flux in each bin.

n : int

Number of harmonics, including the fundamental

Returns:
z2_n : float

The value of the statistic
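
A sketch for a normally-distributed profile (the flux model and the error are illustrative):

>>> import numpy as np
>>> from stingray.pulse import z_n_gauss
>>> rng = np.random.default_rng(0)
>>> phase_bins = np.linspace(0, 1, 32, endpoint=False)
>>> err = 0.1
>>> profile = 1 + 0.2 * np.sin(2 * np.pi * phase_bins) + rng.normal(0, err, 32)
>>> stat = z_n_gauss(profile, err, 2)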

stingray.pulse.z_n_gauss_all(profile, err, nmax=20)[source]

Z^2_n statistic for n harmonics and normally-distributed profiles

See Bachetti+2021, arXiv:2012.11397

Parameters:
profile : array of floats

The folded pulse profile

err : float

The (assumed constant) uncertainty on the flux in each bin.

nmax : int

Maximum number of harmonics, including the fundamental

Returns:
ks : list of ints

Harmonic numbers, from 1 to nmax (included)

z2_n : list of floats

The value of the statistic for all ks

stingray.pulse.z_n_search(times, frequencies, ...)[source]

Calculates the Z^2_n statistics at trial frequencies in photon data.

The “real” Z^2_n statistic is very slow. Therefore, in this function the data are folded first, and the statistic is then calculated using the value of the profile as an additional normalization term. The two methods are mostly equivalent. However, the number of bins has to be chosen wisely: if the number of bins is too small, the search for high harmonics is ineffective.

If no exposure correction is needed and numba is installed, a fast algorithm is used to perform the folding. Otherwise, a much slower algorithm is used, which however yields a more precise result.

The search can be done in segments and the results averaged; use segment_size to control this.

Parameters:
times : array-like

the event arrival times

frequencies : array-like

the trial values for the frequencies

Returns:
(fgrid, stats) or (fgrid, fdgrid, stats), as follows:

fgrid : array-like

frequency grid of the epoch folding periodogram

fdgrid : array-like

frequency derivative grid. Only returned if fdots is an array.

stats : array-like

the Z^2_n statistics corresponding to each frequency bin.

Other Parameters:
nbin : int

the number of bins of the folded profiles

segment_size : float

the length of the segments to be averaged in the periodogram

fdots : array-like

trial values of the first frequency derivative (optional)

expocorr : bool

correct for the exposure (use it if the period is comparable to the length of the good time intervals)

gti : [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good time intervals

weights : array-like

weight for each time. This might be, for example, the number of counts if the times array contains the time bins of a light curve
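
A hedged usage sketch (the function path follows the signature above; the event list is synthetic and nbin is illustrative):

>>> import numpy as np
>>> from stingray.pulse import z_n_search
>>> rng = np.random.default_rng(0)
>>> times = np.sort(rng.uniform(0, 1000, 5000))  # toy (unpulsed) event list
>>> frequencies = np.linspace(0.8, 0.9, 200)     # trial frequencies, in Hz
>>> fgrid, stats = z_n_search(times, frequencies, nbin=32)
>>> best_f = fgrid[np.argmax(stats)]             # frequency of the highest peak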

Simulator

This submodule defines extensive functionality related to simulating spectral-timing data sets, including transfer and window functions, and simulating light curves from power spectra for a range of stochastic processes.

class stingray.simulator.simulator.Simulator(dt, N, mean, rms, err=0.0, red_noise=1, random_state=None, tstart=0.0, poisson=False)[source]

Methods to simulate and visualize light curves.

TODO: Improve documentation

Parameters:
dt : int, default 1

time resolution of the simulated light curve

N : int, default 1024

number of bins of the simulated light curve

mean : float, default 0

mean value of the simulated light curve

rms : float, default 1

fractional rms of the simulated light curve; the actual rms is mean*rms

err : float, default 0

the error bars on the final light curve

red_noise : int, default 1

multiple of the real length of the light curve by which to simulate, to avoid red-noise leakage

random_state : int, default None

seed value for random processes

tstart : float, default 0

start time of the simulated light curve

poisson : bool, default False

return Poisson-distributed light curves.
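
A minimal instantiation sketch (parameter values are illustrative):

>>> from stingray.simulator.simulator import Simulator
>>> sim = Simulator(dt=0.1, N=1024, mean=100, rms=0.3, random_state=12)
>>> lc = sim.simulate(2)  # light curve from a power-law spectrum with beta = 2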

count_channels()[source]

Return total number of energy channels.

delete_channel(channel)[source]

Delete an energy channel.

delete_channels(channels)[source]

Delete multiple energy channels.

get_all_channels()[source]

Get lightcurves belonging to all channels.

get_channel(channel)[source]

Get lightcurve belonging to the energy channel.

get_channels(channels)[source]

Get multiple light curves belonging to the energy channels.

powerspectrum(lc, seg_size=None)[source]

Make a powerspectrum of the simulated light curve.

Parameters:
lc : lightcurve.Lightcurve object or iterable of lightcurve.Lightcurve objects

The light curve data to be Fourier-transformed.

Returns:
power : numpy.ndarray

The array of normalized squared absolute values of the Fourier amplitudes
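
For instance (a sketch, with illustrative parameters):

>>> from stingray.simulator.simulator import Simulator
>>> sim = Simulator(dt=0.1, N=1024, mean=100, rms=0.3, random_state=12)
>>> power = sim.powerspectrum(sim.simulate(2))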

static read(filename, fmt='pickle')[source]

Reads transfer function from a ‘pickle’ file.

Parameters:
fmt : str

the format of the file to be retrieved - accepts ‘pickle’.

Returns:
data : class instance

TransferFunction object

relativistic_ir(t1=3, t2=4, t3=10, p1=1, p2=1.4, rise=0.6, decay=0.1)[source]

Construct a realistic impulse response considering the relativistic effects.

Parameters:
t1 : int

primary peak time

t2 : int

secondary peak time

t3 : int

end time

p1 : float

value of primary peak

p2 : float

value of secondary peak

rise : float

slope of the rising exponential from primary peak to secondary peak

decay : float

slope of the decaying exponential from secondary peak to end time

Returns:
h : numpy.ndarray

Constructed impulse response

simple_ir(start=0, width=1000, intensity=1)[source]

Construct a simple impulse response using start time, width and scaling intensity. To create a delta impulse response, set width to 1.

Parameters:
start : int

start time of the impulse response

width : int

width of the impulse response

intensity : float

scaling parameter to set the intensity of the delayed emission, relative to the direct emission.

Returns:
h : numpy.ndarray

Constructed impulse response

simulate(*args)[source]

Simulate light curve generation using power spectrum or impulse response.

Parameters:
args

See examples below.

Returns:
lightCurve : LightCurve object

Examples

  • x = simulate(beta):

    For generating a light curve using power law spectrum.

    Parameters:
    • beta : float Defines the shape of spectrum

  • x = simulate(s):
    For generating a light curve from user-provided spectrum.

Note: In this case, the red_noise parameter is ignored. You can generate a longer light curve by providing a higher frequency resolution in the input power spectrum.

    Parameters:
    • s : array-like power spectrum

  • x = simulate(model):

    For generating a light curve from pre-defined model

    Parameters:
    • model : astropy.modeling.Model the pre-defined model

  • x = simulate(‘model’, params):

    For generating a light curve from pre-defined model

    Parameters:
    • model : string the pre-defined model

    • params : list iterable or dict the parameters for the pre-defined model

  • x = simulate(s, h):

    For generating a light curve using impulse response.

    Parameters:
    • s : array-like Underlying variability signal

    • h : array-like Impulse response

  • x = simulate(s, h, ‘same’):

    For generating a light curve of same length as input signal, using impulse response.

    Parameters:
    • s : array-like Underlying variability signal

    • h : array-like Impulse response

    • mode : str mode can be ‘same’, ‘filtered’, or ‘full’. ‘same’ indicates that the length of the output light curve is the same as that of the input signal. ‘filtered’ means that the length of the output light curve is len(s) - lag_delay. ‘full’ indicates that the length of the output light curve is len(s) + len(h) - 1.
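
A combined sketch of the spectrum and impulse-response modes (all values are illustrative):

>>> from stingray.simulator.simulator import Simulator
>>> sim = Simulator(dt=0.1, N=1024, mean=100, rms=0.3, random_state=12)
>>> lc = sim.simulate(2)                            # power-law spectrum, beta = 2
>>> h = sim.simple_ir(start=0, width=100, intensity=0.5)
>>> lc_lagged = sim.simulate(lc.counts, h, 'same')  # convolve with impulse response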

simulate_channel(channel, *args)[source]

Simulate a lightcurve and add it to corresponding energy channel.

Parameters:
channel : str

range of the energy channel (e.g., 3.5-4.5)

*args

see the description of simulate() for details

Returns:
lightCurve : LightCurve object
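
A sketch of the channel workflow (the channel label and model are illustrative; count_channels is assumed to reflect the channels simulated so far):

>>> from stingray.simulator.simulator import Simulator
>>> sim = Simulator(dt=0.1, N=1024, mean=100, rms=0.3, random_state=12)
>>> lc_chan = sim.simulate_channel('3.5-4.5', 2)  # power-law model in one channel
>>> assert sim.count_channels() == 1
>>> sim.delete_channel('3.5-4.5')
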
write(filename, fmt='pickle')[source]

Writes a transfer function to a ‘pickle’ file.

Parameters:
fmt : str

the format of the file to be saved - accepts ‘pickle’

Exceptions

Some basic Stingray-related errors and exceptions.

class stingray.exceptions.StingrayError[source]