Stingray API¶
Library of Time Series Methods for Astronomical X-ray Data.
Data Classes¶
These classes define basic functionality related to common data types and typical methods that apply to these data types, including basic read/write functionality. Currently implemented are stingray.Lightcurve and stingray.events.EventList.
Lightcurve¶

class stingray.Lightcurve(time, counts, err=None, input_counts=True, gti=None, err_dist='poisson', mjdref=0, dt=None, skip_checks=False, low_memory=False)[source]¶
Make a light curve object from an array of time stamps and an array of counts.
Parameters:
- time (iterable) – A list or array of time stamps for a light curve.
- counts (iterable, optional, default None) – A list or array of the counts in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second, otherwise use counts/bin).
- err (iterable, optional, default None) – A list or array of the uncertainties in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second, otherwise use counts/bin). If None, we assume the data is Poisson-distributed and calculate the error from the average of the lower and upper 1-sigma confidence intervals for the Poisson distribution with mean equal to counts.
- input_counts (bool, optional, default True) – If True, the code assumes that the input data in counts is in units of counts/bin. If False, it assumes the data in counts is in counts/second.
- gti (2d float array, default None) – [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They are not applied to the data by default. They will be used by other methods to indicate the "safe" time intervals to use during analysis.
- err_dist (str, optional, default None) – Statistical distribution used to calculate the uncertainties and other statistical values appropriately. The default makes no assumptions and keeps the errors equal to zero.
- mjdref (float) – MJD reference (useful in most high-energy mission data).
- skip_checks (bool) – If True, the user specifies that the data are already sorted and contain no infinite or NaN points. Use at your own risk.
- low_memory (bool) – If True, the lazily evaluated attributes (e.g., countrate and countrate_err if input_counts is True) will not be stored in memory, but calculated every time they are requested.

time¶
The array of midpoints of the time bins.
Type: numpy.ndarray

bin_lo¶
The array of lower time stamps of the time bins.
Type: numpy.ndarray

bin_hi¶
The array of upper time stamps of the time bins.
Type: numpy.ndarray

counts¶
The counts per bin corresponding to the bins in time.
Type: numpy.ndarray

counts_err¶
The uncertainties corresponding to counts.
Type: numpy.ndarray

countrate¶
The counts per second in each of the bins defined in time.
Type: numpy.ndarray

countrate_err¶
The uncertainties corresponding to countrate.
Type: numpy.ndarray

mjdref¶
MJD reference date (tstart / 86400 gives the date in MJD at the start of the observation).
Type: float

gti¶
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They indicate the "safe" time intervals to be used during the analysis of the light curve.
Type: 2d float array

err_dist¶
Statistical distribution of the Lightcurve, used to calculate the uncertainties and other statistical values appropriately. It propagates to Spectrum classes.
Type: string

__add__(other)[source]¶
Add the counts of two light curves element by element, assuming the light curves have the same time array.
This magic method adds two Lightcurve objects having the same time array such that the corresponding counts arrays get summed up.
GTIs are crossed, so that only common intervals are saved.
Examples
>>> time = [5, 10, 15]
>>> count1 = [300, 100, 400]
>>> count2 = [600, 1200, 800]
>>> gti1 = [[0, 20]]
>>> gti2 = [[0, 25]]
>>> lc1 = Lightcurve(time, count1, gti=gti1, dt=5)
>>> lc2 = Lightcurve(time, count2, gti=gti2, dt=5)
>>> lc = lc1 + lc2
>>> lc.counts
array([ 900, 1300, 1200])

__eq__(other_lc)[source]¶
Compare two Lightcurve objects.
Light curves are equal only if their counts, as well as the times at which those counts occur, are equal.
Examples
>>> time = [1, 2, 3]
>>> count1 = [100, 200, 300]
>>> count2 = [100, 200, 300]
>>> lc1 = Lightcurve(time, count1, dt=1)
>>> lc2 = Lightcurve(time, count2, dt=1)
>>> lc1 == lc2
True

__getitem__(index)[source]¶
Return the corresponding count value at the index, or a new Lightcurve object upon slicing.
This method adds functionality to retrieve the count value at a particular index. It can also be used for slicing and generating a new Lightcurve object. GTIs are recalculated based on the new light curve segment.
If the slice object is of kind start:stop:step, GTIs are also sliced, and rewritten as zip(time - self.dt / 2, time + self.dt / 2).
Parameters: index (int or slice instance) – Index value of the time array or a slice object.
Examples
>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [11, 22, 33, 44, 55, 66, 77, 88, 99]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc[2]
33
>>> lc[:2].counts
array([11, 22])

__len__()[source]¶
Return the number of time bins of a light curve.
This method overrides the len function for a Lightcurve object and returns the length of the time array (which should be equal to the length of the counts and countrate arrays).
Examples
>>> time = [1, 2, 3]
>>> count = [100, 200, 300]
>>> lc = Lightcurve(time, count, dt=1)
>>> len(lc)
3

__neg__()[source]¶
Implement the behavior of negation of the light curve objects.
The negation operator - is supposed to invert the sign of the count values of a light curve object.
Examples
>>> time = [1, 2, 3]
>>> count1 = [100, 200, 300]
>>> count2 = [200, 300, 400]
>>> lc1 = Lightcurve(time, count1)
>>> lc2 = Lightcurve(time, count2)
>>> lc_new = -lc1 + lc2
>>> lc_new.counts
array([100, 100, 100])

__sub__(other)[source]¶
Subtract the counts/flux of one light curve from the counts/flux of another light curve element by element, assuming the time arrays of the light curves match exactly.
This magic method takes two Lightcurve objects having the same time array and subtracts the counts of one Lightcurve from those of another, while also updating countrate, counts_err and countrate_err correctly.
GTIs are crossed, so that only common intervals are saved.
Examples
>>> time = [10, 20, 30]
>>> count1 = [600, 1200, 800]
>>> count2 = [300, 100, 400]
>>> gti1 = [[0, 35]]
>>> gti2 = [[0, 40]]
>>> lc1 = Lightcurve(time, count1, gti=gti1, dt=5)
>>> lc2 = Lightcurve(time, count2, gti=gti2, dt=5)
>>> lc = lc1 - lc2
>>> lc.counts
array([ 300, 1100,  400])

analyze_lc_chunks(chunk_length, func, fraction_step=1, **kwargs)[source]¶
Analyze segments of the light curve with any function.
Parameters:
- chunk_length (float) – Length in seconds of the light curve segments
- func (function) – Function accepting a Lightcurve object as its single argument, plus possible additional keyword arguments, and returning a number or a tuple, e.g. (result, error), where both result and error are numbers.
Other Parameters:
- fraction_step (float) – If the step is not a full chunk_length but less (e.g. a moving window), this indicates the ratio between the step and chunk_length (e.g. 0.5 means that the window shifts by half of chunk_length)
- kwargs (keyword arguments) – These additional keyword arguments, if present, will be passed to func
Returns:
- start_times (array) – Lower time boundaries of all time segments.
- stop_times (array) – Upper time boundaries of all segments.
- result (array of N elements) – The result of func for each segment of the light curve
Examples
>>> import numpy as np
>>> time = np.arange(0, 10, 0.1)
>>> counts = np.zeros_like(time) + 10
>>> lc = Lightcurve(time, counts, dt=0.1)
>>> # Define a function that calculates the mean
>>> mean_func = lambda x: np.mean(x)
>>> # Calculate the mean in segments of 5 seconds
>>> start, stop, res = lc.analyze_lc_chunks(5, mean_func)
>>> len(res) == 2
True
>>> np.all(res == 10)
True

apply_gtis()[source]¶
Apply GTIs to a light curve. Filters the time, counts, countrate, counts_err and countrate_err arrays for all bins that fall into Good Time Intervals and recalculates the mean countrate and the number of bins.
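The filtering described above can be sketched with plain numpy. The helper below (gti_mask is a hypothetical name, not Stingray's internal code) keeps only the bins whose time stamps fall inside some Good Time Interval:

```python
import numpy as np

def gti_mask(time, gtis):
    """Boolean mask: True where a time stamp lies inside any [start, stop] GTI."""
    time = np.asarray(time)
    mask = np.zeros(time.size, dtype=bool)
    for start, stop in gtis:
        mask |= (time >= start) & (time <= stop)
    return mask

time = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
counts = np.array([10, 20, 30, 40, 50, 60])
gtis = [[0.5, 2.5], [4.5, 6.5]]
mask = gti_mask(time, gtis)
filtered_counts = counts[mask]   # bins at t = 3, 4 fall outside all GTIs
```

The mean countrate would then be recomputed from `filtered_counts` alone, mirroring the recalculation step mentioned above.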

baseline(lam, p, niter=10, offset_correction=False)[source]¶
Calculate the baseline of the light curve, accounting for GTIs.
Parameters:
Other Parameters: offset_correction (bool, default False) – By default, this method does not align to the running mean of the light curve, but stays below the light curve. Setting offset_correction to True adds an additional step that shifts the baseline to the middle of the light curve's noise distribution.
Returns: baseline – An array with the baseline of the light curve
Return type: numpy.ndarray

change_mjdref(new_mjdref)[source]¶
Change the MJD reference time (MJDREF) of the light curve.
Times will now be referred to this new MJDREF.
Parameters: new_mjdref (float) – New MJDREF
Returns: new_lc – The new LC shifted by MJDREF
Return type: lightcurve.Lightcurve object

check_lightcurve()[source]¶
Make various checks on the light curve.
It can be slow; use it if you are not sure about your input data.

estimate_chunk_length(min_total_counts=100, min_time_bins=100)[source]¶
Estimate a reasonable segment length for chunk-by-chunk analysis.
Choose a reasonable length for time segments, given a minimum number of total counts in the segment, and a minimum number of time bins in the segment.
The user specifies a condition on the total counts in each segment and the minimum number of time bins.
Other Parameters:
- min_total_counts (int) – Minimum number of counts for each chunk
- min_time_bins (int) – Minimum number of time bins
Returns: chunk_length – The length of the light curve chunks that satisfies the conditions
Return type: float
Examples
>>> import numpy as np
>>> time = np.arange(150)
>>> count = np.zeros_like(time) + 3
>>> lc = Lightcurve(time, count, dt=1)
>>> lc.estimate_chunk_length(min_total_counts=10, min_time_bins=3)
4.0
>>> lc.estimate_chunk_length(min_total_counts=10, min_time_bins=5)
5.0
>>> count[2:4] = 1
>>> lc = Lightcurve(time, count, dt=1)
>>> lc.estimate_chunk_length(min_total_counts=3, min_time_bins=1)
4.0

join(other)[source]¶
Join two light curves into a single object.
The new Lightcurve object will contain time stamps from both objects. The counts and countrate attributes in the resulting object will contain the union of the non-overlapping parts of the two individual objects, or the average in case of overlapping time arrays of both Lightcurve objects.
Good Time Intervals are also joined.
Note: ideally, the time arrays of both light curves should not overlap.
Parameters: other (Lightcurve object) – The other Lightcurve object which is supposed to be joined with.
Returns: lc_new – The resulting Lightcurve object.
Return type: Lightcurve object
Examples
>>> time1 = [5, 10, 15]
>>> count1 = [300, 100, 400]
>>> time2 = [20, 25, 30]
>>> count2 = [600, 1200, 800]
>>> lc1 = Lightcurve(time1, count1, dt=5)
>>> lc2 = Lightcurve(time2, count2, dt=5)
>>> lc = lc1.join(lc2)
>>> lc.time
array([ 5, 10, 15, 20, 25, 30])
>>> lc.counts
array([ 300,  100,  400,  600, 1200,  800])

static make_lightcurve(toa, dt, tseg=None, tstart=None, gti=None, mjdref=0, use_hist=False)[source]¶
Make a light curve out of photon arrival times, with a given time resolution dt. Note that dt should be larger than the native time resolution of the instrument that has taken the data.
Parameters:
- toa (iterable) – list of photon arrival times
- dt (float) – time resolution of the light curve (the bin width)
- tseg (float, optional, default None) – The total duration of the light curve. If this is None, then the total duration of the light curve will be the interval between the arrival times of the first and the last photon in toa. Note: if tseg is not divisible by dt (i.e. if tseg / dt is not an integer number), then the last fractional bin will be dropped!
- tstart (float, optional, default None) – The start time of the light curve. If this is None, the arrival time of the first photon will be used as the start time of the light curve.
- gti (2d float array) – [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals
- use_hist (bool) – Use np.histogram instead of np.bincount. Might be advantageous for very short datasets.
Returns: lc – A Lightcurve object with the binned light curve
Return type: Lightcurve object
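As a sketch of what the binning does (illustrative numpy only, along the lines of the use_hist=True path, not the library source), photon arrival times are counted into bins of width dt, and the resulting time array holds the bin midpoints:

```python
import numpy as np

# Illustrative binning of photon arrival times into a light curve.
toa = np.array([0.1, 0.3, 0.35, 1.2, 1.9, 2.5])  # arrival times (s)
dt = 1.0
tstart = 0.0
tseg = 3.0

nbins = int(tseg / dt)                  # a fractional last bin would be dropped
edges = tstart + dt * np.arange(nbins + 1)
counts, _ = np.histogram(toa, bins=edges)
bin_mid = edges[:-1] + dt / 2           # Lightcurve.time holds bin midpoints
# counts -> [3, 2, 1]; bin_mid -> [0.5, 1.5, 2.5]
```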

plot(witherrors=False, labels=None, axis=None, title=None, marker='-', save=False, filename=None)[source]¶
Plot the light curve using matplotlib.
Plot the light curve object on a graph with self.time on the x-axis and self.counts on the y-axis, with self.counts_err optionally as error bars.
Parameters:
- witherrors (boolean, default False) – Whether to plot the Lightcurve with error bars or not
- labels (iterable, default None) – A list of tuples with xlabel and ylabel as strings.
- axis (list, tuple, string, default None) – Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.
- title (str, default None) – The title of the plot.
- marker (str, default '-') – Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.
- save (boolean, optional, default False) – If True, save the figure with the specified filename.
- filename (str) – File name of the image to save. Only used if save is True.

read(filename, format_='pickle', default_err_dist='gauss')[source]¶
Read a Lightcurve object from file. Currently supported formats are:
- pickle (not recommended for long-term storage)
- HDF5
- ASCII
Parameters:
Other Parameters: default_err_dist (str, default='gauss') – Default error distribution if not specified in the file (e.g. for ASCII files). The default is 'gauss' just because it is likely that people using ASCII light curves will want to specify Gaussian error bars, if any.
Returns: lc –
- If format_ is ascii: an astropy.table is returned.
- If format_ is hdf5: a dictionary with key-value pairs is returned.
- If format_ is pickle: a Lightcurve object is returned.
Return type: astropy.table or dict or Lightcurve object

rebin(dt_new=None, f=None, method='sum')[source]¶
Rebin the light curve to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.
Parameters:
- dt_new (float) – The new time resolution of the light curve. Must be larger than the time resolution of the old light curve!
- method ({sum | mean | average}, optional, default sum) – This keyword argument sets whether the counts in the new bins should be summed or averaged.
Other Parameters: f (float) – the rebin factor. If specified, it substitutes dt_new with f*self.dt
Returns: lc_new – The Lightcurve object with the new, binned light curve.
Return type: Lightcurve object
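For the simple case where dt_new is an integer multiple of the old resolution, the rebinning with method='sum' amounts to summing counts in groups of n = dt_new / dt bins, dropping the leftover fraction, as described above (an illustrative numpy sketch, not Stingray's implementation):

```python
import numpy as np

counts = np.array([1, 2, 3, 4, 5, 6, 7])
dt_old, dt_new = 1.0, 2.0

n = int(dt_new / dt_old)                # bins merged per new bin
nkeep = (counts.size // n) * n          # the last fractional bin is cut off
rebinned = counts[:nkeep].reshape(-1, n).sum(axis=1)
# rebinned -> [3, 7, 11]; the trailing count of 7 is dropped
```

With method='mean' one would divide each sum by n instead.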

shift(time_shift)[source]¶
Shift the light curve and the GTIs in time.
Parameters: time_shift (float) – The time interval by which the light curve will be shifted (in the same units as the time array in Lightcurve)
Returns: new_lc – The new LC shifted by time_shift
Return type: lightcurve.Lightcurve object

sort(reverse=False)[source]¶
Sort a Lightcurve object by time.
A Lightcurve can be sorted in either increasing or decreasing order using this method. The time array gets sorted and the counts array is changed accordingly.
Parameters: reverse (boolean, default False) – If True, then the object is sorted in reverse order.
Returns: lc_new – The Lightcurve object with sorted time and counts arrays.
Return type: Lightcurve object
Examples
>>> time = [2, 1, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.sort()
>>> lc_new.time
array([1, 2, 3])
>>> lc_new.counts
array([100, 200, 300])

sort_counts(reverse=False)[source]¶
Sort a Lightcurve object in accordance with its counts array.
A Lightcurve can be sorted in either increasing or decreasing order using this method. The counts array gets sorted and the time array is changed accordingly.
Parameters: reverse (boolean, default False) – If True, then the object is sorted in reverse order.
Returns: lc_new – The Lightcurve object with sorted time and counts arrays.
Return type: Lightcurve object
Examples
>>> time = [1, 2, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.sort_counts()
>>> lc_new.time
array([2, 1, 3])
>>> lc_new.counts
array([100, 200, 300])

split(min_gap, min_points=1)[source]¶
For data with gaps, it can sometimes be useful to split the light curve into separate, evenly sampled objects along those data gaps. This method allows one to do this: it finds data gaps of a specified minimum size, and produces a list of new Lightcurve objects for each contiguous segment.
Parameters:
- min_gap (float) – The length of a data gap, in the same units as the time attribute of the Lightcurve object. Any smaller gaps will be ignored, any larger gaps will be identified and used to split the light curve.
- min_points (int, default 1) – The minimum number of data points in each light curve. Light curves with fewer data points will be ignored.
Returns: lc_split – The list of all contiguous light curves
Return type: iterable of Lightcurve objects
Example
>>> time = np.array([1, 2, 3, 6, 7, 8, 11, 12, 13])
>>> counts = np.random.rand(time.shape[0])
>>> lc = Lightcurve(time, counts, dt=1)
>>> split_lc = lc.split(1.5)

split_by_gti(min_points=2)[source]¶
Split the current Lightcurve object into a list of Lightcurve objects, one for each continuous GTI segment as defined in the gti attribute.
Parameters: min_points (int, default 2) – The minimum number of data points in each light curve. Light curves with fewer data points will be ignored.
Returns: list_of_lcs – A list of Lightcurve objects, one for each GTI segment
Return type: list
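The splitting logic can be sketched in plain numpy: select the bins inside each GTI and keep only segments with at least min_points bins (the function name split_by_gti below is reused for illustration; this is not the library's source):

```python
import numpy as np

def split_by_gti(time, counts, gtis, min_points=2):
    """One (time, counts) segment per GTI, skipping too-short segments."""
    segments = []
    for start, stop in gtis:
        sel = (time >= start) & (time <= stop)
        if sel.sum() >= min_points:
            segments.append((time[sel], counts[sel]))
    return segments

time = np.arange(10, dtype=float)
counts = np.ones(10)
gtis = [[0, 3.5], [5.5, 6.2], [7.5, 9.5]]
segs = split_by_gti(time, counts, gtis)
# The middle GTI contains only one bin (t = 6), so it is dropped.
```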

truncate(start=0, stop=None, method='index')[source]¶
Truncate a Lightcurve object.
This method takes a start and a stop point (either as indices, or as times in the same unit as those in the time attribute) and truncates all bins before start and after stop, then returns a new Lightcurve object with the truncated light curve.
Parameters:
- start (int, default 0) – Index (or time stamp) of the starting point of the truncation. If no value is set for the start point, then all points from the first element in the time array are taken into account.
- stop (int, default None) – Index (or time stamp) of the ending point (exclusive) of the truncation. If no value of stop is set, then points up to and including the last point in the counts array are taken into account.
- method ({index | time}, optional, default index) – Type of the start and stop values. If set to index, the values are treated as indices of the counts array; if set to time, the values are treated as actual time values.
Returns: lc_new – The Lightcurve object with truncated time and counts arrays.
Return type: Lightcurve object
Examples
>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [10, 20, 30, 40, 50, 60, 70, 80, 90]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.truncate(start=2, stop=8)
>>> lc_new.counts
array([30, 40, 50, 60, 70, 80])
>>> lc_new.time
array([3, 4, 5, 6, 7, 8])
>>> # Truncation can also be done by time values
>>> lc_new = lc.truncate(start=6, method='time')
>>> lc_new.time
array([6, 7, 8, 9])
>>> lc_new.counts
array([60, 70, 80, 90])

write(filename, format_='pickle', **kwargs)[source]¶
Write a Lightcurve object to file. Currently supported formats are:
- pickle (not recommended for long-term storage)
- HDF5
- ASCII
Parameters:
EventList¶

class stingray.events.EventList(time=None, energy=None, ncounts=None, mjdref=0, dt=0, notes='', gti=None, pi=None)[source]¶
Basic class for event list data. Event lists generally correspond to individual events (e.g. photons) recorded by the detector, and their associated properties. For X-ray data, where this type commonly occurs, events are time stamps of when a photon arrived in the detector, and (optionally) the photon energy associated with the event.
Parameters: time (iterable) – A list or array of time stamps
Other Parameters:
- dt (float) – The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.
- energy (iterable) – A list or array of photon energy values
- mjdref (float) – The MJD used as a reference for the time array.
- ncounts (int) – Number of desired data points in the event list.
- gtis ([[gti0_0, gti0_1], [gti1_0, gti1_1], ...]) – Good Time Intervals
- pi (integer, numpy.ndarray) – PI channels
- notes (str) – Any useful annotations

time¶
The array of event arrival times, in seconds from the reference MJD defined in mjdref.
Type: numpy.ndarray

energy¶
The array of photon energy values.
Type: numpy.ndarray

dt¶
The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.
Type: float

gtis¶
Good Time Intervals.
Type: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

pi¶
PI channels.
Type: integer, numpy.ndarray

apply_deadtime(deadtime, inplace=False, **kwargs)[source]¶
Apply a deadtime filter to this event list.
Additional arguments in kwargs are passed to get_deadtime_mask.
Parameters:
Returns:
- new_event_list (EventList object) – Filtered event list. If inplace is True, this is the input object filtered for deadtime, otherwise this is a new object.
- additional_output (object) – Only returned if return_all is True. See get_deadtime_mask for more details.
Examples
>>> events = np.array([1, 1.05, 1.07, 1.08, 1.1, 2, 2.2, 3, 3.1, 3.2])
>>> events = EventList(events)
>>> events.pi = np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.energy = np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.mjdref = 10
>>> filt_events, retval = events.apply_deadtime(0.11, inplace=False,
...                                             verbose=False,
...                                             return_all=True)
>>> filt_events is events
False
>>> expected = np.array([1, 2, 2.2, 3, 3.2])
>>> np.all(filt_events.time == expected)
True
>>> np.all(filt_events.pi == 1)
True
>>> np.all(filt_events.energy == 1)
True
>>> np.all(events.pi == 1)
False
>>> filt_events = events.apply_deadtime(0.11, inplace=True,
...                                     verbose=False)
>>> filt_events is events
True

static from_lc(lc)[source]¶
Create an EventList from a stingray.Lightcurve object. Note that all events in a given time bin will have the same time stamp.
Parameters: lc (stingray.Lightcurve object) – Light curve to use for creation of the event list.
Returns: ev – The resulting list of photon arrival times generated from the light curve.
Return type: EventList object

join(other)[source]¶
Join two EventList objects into one.
If both are empty, an empty EventList is returned.
GTIs are crossed if the event lists overlap a common time interval, and appended otherwise.
pi and pha remain None if they are None in both. Otherwise, 0 is used as a default value for the EventList where they were None.
Parameters: other (EventList object) – The other EventList object which is supposed to be joined with.
Returns: ev_new – The resulting EventList object.
Return type: EventList object

static read(filename, format_='pickle')[source]¶
Read an event list from a file on disk. The file must be either a Python pickle file (not recommended for long-term storage), an HDF5 file, an ASCII file or a FITS file. The file can have the following attributes in the data or metadata:
- time: the time stamps of the photon arrivals
- energy: the photon energy corresponding to each time stamp
- ncounts: the total number of photon counts recorded
- mjdref: a reference time in Modified Julian Date
- dt: the time resolution of the data
- notes: other possible metadata
- gti: Good Time Intervals
- pi: some instruments record energies as "Pulse Invariant", an integer number recorded from the Pulse Height Amplitude
Parameters:
Returns: ev – The EventList object reconstructed from file
Return type: EventList object

simulate_energies
(spectrum)[source]¶ Assign (simulate) energies to event list from a spectrum.
Parameters: spectrum (2d array or list) – Energies versus corresponding fluxes. The 2d array or list must have energies across the first dimension and fluxes across the second one.

simulate_times(lc, use_spline=False, bin_time=None)[source]¶
Randomly assign (simulate) photon arrival times to an EventList from a stingray.Lightcurve object, using the acceptance-rejection method.
Parameters: lc (stingray.Lightcurve object)
Other Parameters:
- use_spline (bool) – Approximate the light curve with a spline to avoid binning effects
- bin_time (float) – The bin time of the light curve, if it needs to be specified for improved precision
Returns: times – Simulated photon arrival times
Return type: array-like
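The acceptance-rejection idea can be sketched as follows: propose uniform random times over the observation, and accept each one with probability proportional to the counts in the corresponding light curve bin (an illustrative sketch only; the actual implementation and its options differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy binned light curve: a bright middle bin.
counts = np.array([10.0, 40.0, 10.0])
dt = 1.0
tstart, tstop = 0.0, 3.0

n_events = 600
accepted = []
while len(accepted) < n_events:
    t = rng.uniform(tstart, tstop)          # propose a uniform arrival time
    idx = int((t - tstart) // dt)           # which light curve bin it falls in
    if rng.uniform(0, counts.max()) < counts[idx]:
        accepted.append(t)                  # accept proportionally to counts
events = np.sort(np.array(accepted))
# Most simulated events land in the bright middle bin [1, 2).
```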

to_lc(dt, tstart=None, tseg=None)[source]¶
Convert the event list to a stingray.Lightcurve object.
Parameters: dt (float) – Binning time of the light curve
Other Parameters:
- tstart (float) – Start time of the light curve
- tseg (float) – Total duration of the light curve
Returns: lc
Return type: stingray.Lightcurve object
Fourier Products¶
These classes implement commonly used Fourier analysis products, most importantly Crossspectrum and Powerspectrum, along with the variants for averaged cross/power spectra.
Crossspectrum¶

class stingray.Crossspectrum(lc1=None, lc2=None, norm='none', gti=None, power_type='real')[source]¶
Make a cross spectrum from a (binned) light curve. You can also make an empty Crossspectrum object to populate with your own Fourier-transformed data (this can sometimes be useful when making binned power spectra).
Parameters:
- lc1 (stingray.Lightcurve object, optional, default None) – The first light curve data for the channel/band of interest.
- lc2 (stingray.Lightcurve object, optional, default None) – The light curve data for the reference band.
- norm ({frac, abs, leahy, none}, default none) – The normalization of the (real part of the) cross spectrum.
- power_type (string, optional, default real) – Parameter to choose among complete, real part and magnitude of the cross spectrum.
Other Parameters: gti (2d float array) – [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time Intervals. This choice overrides the GTIs in the single light curves. Use with care!
freq
¶ The array of midbin frequencies that the Fourier transform samples
Type: numpy.ndarray

power
¶ The array of cross spectra (complex numbers)
Type: numpy.ndarray

power_err¶
The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.
Type: numpy.ndarray

classical_significances(threshold=1, trial_correction=False)[source]¶
Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
- The power spectrum is Leahy-normalized
- There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
- There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)
By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.
Parameters:
- threshold (float, optional, default 1) – The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).
- trial_correction (bool, optional, default False) – A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
Returns: pvals – A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.
Return type: iterable
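For the m = 1 case, a Leahy-normalized noise power follows a chi-square distribution with 2 degrees of freedom, whose survival function is exp(-P/2). A minimal sketch of the single-trial p-value and thresholding described above (illustrative only; the method itself also handles m > 1 and trial correction):

```python
import numpy as np

# Three Leahy powers: noise-like, marginal, and strongly significant.
powers = np.array([2.0, 10.0, 40.0])

# Survival function of chi-square with 2 dof: p = exp(-P / 2).
pvals = np.exp(-powers / 2)

threshold = 1e-3
significant = [(i, p) for i, p in enumerate(pvals) if p < threshold]
# Only the P = 40 power (index 2) survives a 1e-3 threshold.
```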

coherence()[source]¶
Compute the coherence function of the cross spectrum.
Coherence is defined in Vaughan and Nowak, 1996 [vaughan1996]. It is a Fourier-frequency-dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Returns: coh – Coherence function
Return type: numpy.ndarray
References
[vaughan1996] http://iopscience.iop.org/article/10.1086/310430/pdf

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None)[source]¶
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
Parameters:
- labels (iterable, default None) – A list of tuples with xlabel and ylabel as strings.
- axis (list, tuple, string, default None) – Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.
- title (str, default None) – The title of the plot.
- marker (str, default '-') – Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.
- save (boolean, optional, default False) – If True, save the figure with the specified filename.
- filename (str) – File name of the image to save. Only used if save is True.

rebin(df=None, f=None, method='mean')[source]¶
Rebin the cross spectrum to a new frequency resolution df.
Parameters: df (float) – The new frequency resolution
Other Parameters: f (float) – the rebin factor. If specified, it substitutes df with f*self.df
Returns: bin_cs – The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
Return type: Crossspectrum (or one of its subclasses) object

rebin_log(f=0.01)[source]¶
Logarithmic rebin of the periodogram. The new frequency resolution depends on the previous one, modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters: f (float, optional, default 0.01) – parameter that steers the frequency resolution
Returns: new_spec – The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
Return type: Crossspectrum (or one of its subclasses) object
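The recurrence above says that each frequency bin is wider than the previous one by a factor (1 + f). A sketch of constructing such geometrically growing bin edges (illustrative only, not the library's binning code):

```python
import numpy as np

f = 0.3                                  # exaggerated for clarity; default is 0.01
df0 = 1.0                                # width of the first bin
fmin, fmax = 1.0, 10.0

edges = [fmin]
width = df0
while edges[-1] + width < fmax:
    edges.append(edges[-1] + width)
    width *= 1 + f                       # d nu_j = d nu_{j-1} * (1 + f)
edges = np.array(edges)
widths = np.diff(edges)
# Each successive bin width is 1.3x the previous one.
```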
Coherence¶
Convenience function to compute the coherence between two stingray.Lightcurve objects.

stingray.
coherence
(lc1, lc2)[source]¶ Estimate coherence function of two light curves. For details on the definition of the coherence, see [vaughan1996].
Parameters:  lc1 (
stingray.Lightcurve
object) – The first light curve data for the channel of interest.  lc2 (
stingray.Lightcurve
object) – The light curve data for the reference band
Returns: coh – The array of coherence versus frequency
Return type: np.ndarray
References
[vaughan1996] http://iopscience.iop.org/article/10.1086/310430/pdf
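For orientation, the quantity this convenience function estimates can be sketched in plain numpy. This is an illustrative, segment-averaged version (stingray's actual implementation additionally handles GTIs, normalization and uncertainties; the function and variable names below are not part of the stingray API):

```python
import numpy as np

def averaged_coherence(x, y, seg_len):
    """Segment-averaged coherence |<X Y*>|^2 / (<|X|^2> <|Y|^2>), no noise correction."""
    n_seg = len(x) // seg_len
    cross = 0.0
    p1 = 0.0
    p2 = 0.0
    for i in range(n_seg):
        f1 = np.fft.rfft(x[i * seg_len:(i + 1) * seg_len])
        f2 = np.fft.rfft(y[i * seg_len:(i + 1) * seg_len])
        cross = cross + f1 * np.conj(f2)   # accumulate the cross spectrum
        p1 = p1 + np.abs(f1) ** 2          # accumulate the two power spectra
        p2 = p2 + np.abs(f2) ** 2
    return np.abs(cross) ** 2 / (p1 * p2)

# Two identical series are perfectly coherent at every frequency
rng = np.random.default_rng(0)
x = rng.normal(size=1024)
coh = averaged_coherence(x, x, 128)
```

For identical inputs the coherence is 1 at every frequency; for two series sharing a signal plus independent noise it falls between 0 and 1.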
Powerspectrum¶

class
stingray.
Powerspectrum
(lc=None, norm='frac', gti=None)[source]¶ Make a
Powerspectrum
(also called a periodogram) from a (binned) light curve. Periodograms can be normalized by Leahy normalization, fractional rms normalization, absolute rms normalization, or not at all. You can also make an empty
Powerspectrum
object to populate with your own Fourier-transformed data (this can sometimes be useful when making binned power spectra).Parameters:  lc (
stingray.Lightcurve
object, optional, defaultNone
) – The light curve data to be Fouriertransformed.  norm ({
leahy
frac
abs
none
}, optional, defaultfrac
) – The normalization of the power spectrum to be used. Options are leahy
,frac
,abs
andnone
, default isfrac
.
Other Parameters: gti (2d float array) –
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
– Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!
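As a rough illustration of the Leahy normalization named above, a minimal numpy sketch (a simplified stand-in for what the class does internally; the helper name is illustrative only):

```python
import numpy as np

def leahy_periodogram(counts, dt):
    """Leahy-normalized periodogram: P = 2 |F|^2 / N_photons (DC bin dropped)."""
    fourier = np.fft.rfft(counts)
    nphot = counts.sum()
    freq = np.fft.rfftfreq(len(counts), dt)
    return freq[1:], 2.0 * np.abs(fourier[1:]) ** 2 / nphot

# For pure Poisson noise, Leahy powers follow chi^2 with 2 dof: mean power ~ 2
rng = np.random.default_rng(1)
counts = rng.poisson(100.0, size=8192)
freq, power = leahy_periodogram(counts, dt=0.1)
```

The fractional rms normalization differs only in the scaling applied to the same squared Fourier amplitudes.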
norm
¶ The normalization of the power spectrum
Type: { leahy
frac
abs
none
}

freq
¶ The array of midbin frequencies that the Fourier transform samples
Type: numpy.ndarray

power
¶ The array of normalized squared absolute values of Fourier amplitudes
Type: numpy.ndarray

power_err
¶ The uncertainties of
power
. An approximation for each bin is given by power_err = power/sqrt(m)
, where m
is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra). Note that for a single realization (m=1
) the error is equal to the power.Type: numpy.ndarray

_fourier_cross
(lc1, lc2)¶ Fourier transform the two light curves, then compute the cross spectrum. Computed as CS = lc1 x lc2* (where lc2 is the one that gets complexconjugated)
Parameters:  lc1 (
stingray.Lightcurve
object) – One light curve to be Fourier transformed. This is the band of interest or channel of interest.  lc2 (
stingray.Lightcurve
object) – Another light curve to be Fourier transformed. This is the reference band.
Returns: fr – The squared absolute value of the Fourier amplitudes
Return type: numpy.ndarray

_make_auxil_pds
(lc1, lc2)¶ Helper method to create the power spectrum of both light curves independently.
Parameters: lc1, lc2 – Two light curves used for computing the cross spectrum.

_make_crossspectrum
(lc1, lc2)¶ Auxiliary method computing the normalized cross spectrum from two light curves. This includes checking for the presence of and applying Good Time Intervals, computing the unnormalized Fourier crossamplitude, and then renormalizing using the required normalization. Also computes an uncertainty estimate on the cross spectral powers.
Parameters: lc1, lc2 – Two light curves used for computing the cross spectrum.

_normalize_crossspectrum
(unnorm_power, tseg)¶ Normalize the real part of the cross spectrum to Leahy, absolute rms^2, fractional rms^2 normalization, or not at all.
Parameters:  unnorm_power (numpy.ndarray) – The unnormalized cross spectrum.
 tseg (int) – The length of the Fourier segment, in seconds.
Returns: power – The normalized cospectrum (real part of the cross spectrum). For ‘none’ normalization, the imaginary part is returned as well.
Return type: numpy.ndarray

_phase_lag
()¶ Return the Fourier phase lag of the cross spectrum.

_rms_error
(powers)[source]¶ Compute the error on the fractional rms amplitude using error propagation. Note: this uses the actual measured powers, which is not strictly correct. We should be using the underlying power spectrum, but in the absence of an estimate of that, this will have to do.
\[r = \sqrt{P}\]\[\delta r = \frac{1}{2\sqrt{P}} \delta P\]Parameters: powers (iterable) – The list of powers used to compute the fractional rms amplitude. Returns: delta_rms – The error on the fractional rms amplitude Return type: float

classical_significances
(threshold=1, trial_correction=False)[source]¶ Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chisquare distributions with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
 The power spectrum is Leahynormalized
 There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
 There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces
(index,pvalues)
for all powers in the power spectrum, where index is the numerical index of the power in question. If athreshold
is set, then only powers with p-values below that threshold are reported, together with their respective indices. If trial_correction
is set toTrue
, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.Parameters:  threshold (float, optional, default
1
) – The threshold to be used when reporting pvalues of potentially significant powers. Must be between 0 and 1. Default is1
(all pvalues will be reported).  trial_correction (bool, optional, default
False
) – A Boolean flag that sets whether thethreshold
will be corrected by the number of frequencies before being applied. This decreases thethreshold
(pvalues need to be lower to count as significant). Default isFalse
(report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
Returns: pvals – A list of
(index, pvalue)
tuples for all powers that have pvalues lower than the threshold specified inthreshold
.Return type: iterable
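The chi-squared logic described above can be sketched with scipy.stats (a simplified illustration of the null-hypothesis test, not stingray's exact code path; scipy is assumed available and the helper name is made up for this example):

```python
import numpy as np
from scipy.stats import chi2

def classical_pvalue(power, m=1):
    """Survival probability of a Leahy-normalized power under pure noise.

    For m averaged powers, m * power is chi^2-distributed with 2m degrees
    of freedom under the null hypothesis of pure noise.
    """
    return chi2.sf(m * power, 2 * m)

# For a single power (m = 1) this reduces to exp(-power / 2)
p = classical_pvalue(40.0)

# A Bonferroni-style trial correction divides the threshold by the
# number of sampled frequencies before comparing:
corrected_threshold = 0.01 / 512
```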

coherence
()¶ Compute Coherence function of the cross spectrum.
Coherence is defined in Vaughan and Nowak, 1996 [vaughan1996]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Returns: coh – Coherence function Return type: numpy.ndarray References
[vaughan1996] http://iopscience.iop.org/article/10.1086/310430/pdf

compute_rms
(min_freq, max_freq, white_noise_offset=0.0)[source]¶ Compute the fractional rms amplitude in the power spectrum between two frequencies.
Parameters: Other Parameters: white_noise_offset (float, default 0) – This is the white noise level, in Leahy normalization. In the ideal case, this is 2. Dead time and other instrumental effects can alter it. The user can fit the white noise level outside this function and it will get subtracted from powers here.
Returns:  rms (float) – The fractional rms amplitude contained between
min_freq
andmax_freq
 rms_err (float) – The error on the fractional rms amplitude
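The band-integrated rms described above amounts to summing the powers over the requested frequency range. A minimal sketch for a fractional-rms-normalized spectrum (illustrative names; white-noise subtraction and error propagation are omitted):

```python
import numpy as np

def band_rms(freq, power, df, min_freq, max_freq):
    """Fractional rms in [min_freq, max_freq]: sqrt of the integrated power."""
    mask = (freq >= min_freq) & (freq <= max_freq)
    return np.sqrt(np.sum(power[mask]) * df)

# A flat frac-normalized spectrum of 0.02 (rms/mean)^2 per Hz
freq = np.arange(0.5, 10.5, 0.5)
power = np.full_like(freq, 0.02)
rms = band_rms(freq, power, df=0.5, min_freq=1.0, max_freq=6.0)
```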

plot
(labels=None, axis=None, title=None, marker='', save=False, filename=None)¶ Plot the amplitude of the cross spectrum vs. the frequency using
matplotlib
.Parameters:  labels (iterable, default
None
) – A list of tuple withxlabel
andylabel
as strings.  axis (list, tuple, string, default
None
) – Parameter to set axis properties of thematplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.  title (str, default
None
) – The title of the plot.  marker (str, default '') – Line style and color of the plot. Line styles and colors are
combined in a single format string, as in
'bo'
for blue circles. See matplotlib.pyplot.plot
for more options.  save (boolean, optional, default
False
) – IfTrue
, save the figure with specified filename.  filename (str) – File name of the image to save. Depends on the boolean
save
.

rebin
(df=None, f=None, method='mean')[source]¶ Rebin the power spectrum.
Parameters: df (float) – The new frequency resolution Other Parameters: f (float) – the rebin factor. If specified, it substitutes df
withf*self.df
Returns: The newly binned power spectrum. Return type: bin_cs = Powerspectrum
object

rebin_log
(f=0.01)¶ Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]Parameters: f (float, optional, default 0.01
) – parameter that steers the frequency resolutionReturns: new_spec – The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum
, it will return an object of class AveragedPowerspectrum, too.Return type: Crossspectrum
(or one of its subclasses) object
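The geometric growth of the bin widths, dν_j = dν_{j-1}(1 + f), can be illustrated directly. This sketch only builds the binning grid, not the averaging of powers within each bin (names are illustrative):

```python
import numpy as np

def log_rebin_edges(fmin, fmax, df0, f=0.01):
    """Bin edges where each new bin is a factor (1 + f) wider than the last."""
    edges = [fmin]
    dnu = df0
    while edges[-1] + dnu < fmax:
        edges.append(edges[-1] + dnu)
        dnu *= 1.0 + f          # each bin grows by the factor (1 + f)
    edges.append(fmax)          # the final bin is clipped at fmax
    return np.array(edges)

edges = log_rebin_edges(1.0, 100.0, 0.5, f=0.1)
widths = np.diff(edges)
```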

time_lag
()¶ Calculate the Fourier time lag of the cross spectrum. The time lag is calculated using the centers of the frequency bins.
AveragedCrossspectrum¶

class
stingray.
AveragedCrossspectrum
(lc1=None, lc2=None, segment_size=None, norm='none', gti=None, power_type='real')[source]¶ Make an averaged cross spectrum from a light curve by segmenting two light curves, Fouriertransforming each segment and then averaging the resulting cross spectra.
Parameters:  lc1 (
stingray.Lightcurve
object OR iterable ofstingray.Lightcurve
objects) – A light curve from which to compute the cross spectrum. In some cases, this would be the light curve of the wavelength/energy/frequency band of interest.  lc2 (
stingray.Lightcurve
object OR iterable ofstingray.Lightcurve
objects) – A second light curve to use in the cross spectrum. In some cases, this would be the wavelength/energy/frequency reference band to compare the band of interest with.  segment_size (float) – The size of each segment to average. Note that if the total
duration of each
Lightcurve
object inlc1
orlc2
is not an integer multiple of thesegment_size
, then any fraction leftover at the end of the time series will be lost, because keeping a partial segment would introduce artifacts.  norm ({
frac
,abs
,leahy
,none
}, defaultnone
) – The normalization of the (real part of the) cross spectrum.
Other Parameters:  gti (2d float array) –
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
– Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!  power_type (string, optional, default
real
Parameter to choose among)  complete, real part and magnitude of the cross spectrum.

freq
¶ The array of midbin frequencies that the Fourier transform samples
Type: numpy.ndarray

power
¶ The array of cross spectra
Type: numpy.ndarray

power_err
¶ The uncertainties of
power
. An approximation for each bin is given by power_err = power/sqrt(m)
, where m
is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra). Note that for a single realization (m=1
) the error is equal to the power.Type: numpy.ndarray

gti
¶ [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
– Good Time intervals. They are calculated by taking the common GTI between the two light curvesType: 2d float array

classical_significances
(threshold=1, trial_correction=False)¶ Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chisquare distributions with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
 The power spectrum is Leahynormalized
 There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
 There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces
(index,pvalues)
for all powers in the power spectrum, where index is the numerical index of the power in question. If athreshold
is set, then only powers with p-values below that threshold are reported, together with their respective indices. If trial_correction
is set toTrue
, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.Parameters:  threshold (float, optional, default
1
) – The threshold to be used when reporting pvalues of potentially significant powers. Must be between 0 and 1. Default is1
(all pvalues will be reported).  trial_correction (bool, optional, default
False
) – A Boolean flag that sets whether thethreshold
will be corrected by the number of frequencies before being applied. This decreases thethreshold
(pvalues need to be lower to count as significant). Default isFalse
(report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
Returns: pvals – A list of
(index, pvalue)
tuples for all powers that have pvalues lower than the threshold specified inthreshold
.Return type: iterable

coherence
()[source]¶ Averaged Coherence function.
Coherence is defined in Vaughan and Nowak, 1996 [vaughan1996]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Compute an averaged Coherence function of cross spectrum by computing coherence function of each segment and averaging them. The return type is a tuple with first element as the coherence function and the second element as the corresponding uncertainty associated with it.
Note: The uncertainty in the coherence function is strictly valid for Gaussian statistics only.
Returns: (coh, uncertainty) – Tuple comprising the coherence function and uncertainty. Return type: tuple of np.ndarray References
[vaughan1996] http://iopscience.iop.org/article/10.1086/310430/pdf

plot
(labels=None, axis=None, title=None, marker='', save=False, filename=None)¶ Plot the amplitude of the cross spectrum vs. the frequency using
matplotlib
.Parameters:  labels (iterable, default
None
) – A list of tuple withxlabel
andylabel
as strings.  axis (list, tuple, string, default
None
) – Parameter to set axis properties of thematplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.  title (str, default
None
) – The title of the plot.  marker (str, default '') – Line style and color of the plot. Line styles and colors are
combined in a single format string, as in
'bo'
for blue circles. See matplotlib.pyplot.plot
for more options.  save (boolean, optional, default
False
) – IfTrue
, save the figure with specified filename.  filename (str) – File name of the image to save. Depends on the boolean
save
.

rebin
(df=None, f=None, method='mean')¶ Rebin the cross spectrum to a new frequency resolution
df
.Parameters: df (float) – The new frequency resolution Other Parameters: f (float) – the rebin factor. If specified, it substitutes df with f*self.df
Returns: The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum
, it will return an object of classAveragedPowerspectrum
, too.Return type: bin_cs = Crossspectrum
(or one of its subclasses) object

rebin_log
(f=0.01)¶ Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]Parameters: f (float, optional, default 0.01
) – parameter that steers the frequency resolutionReturns: new_spec – The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum
, it will return an object of class AveragedPowerspectrum, too.Return type: Crossspectrum
(or one of its subclasses) object
AveragedPowerspectrum¶

class
stingray.
AveragedPowerspectrum
(lc=None, segment_size=None, norm='frac', gti=None)[source]¶ Make an averaged periodogram from a light curve by segmenting the light curve, Fouriertransforming each segment and then averaging the resulting periodograms.
Parameters:  lc (
stingray.Lightcurve object OR iterable of stingray.Lightcurve
objects) – The light curve data to be Fouriertransformed.  segment_size (float) – The size of each segment to average. Note that if the total
duration of each
Lightcurve
object in lc is not an integer multiple of thesegment_size
, then any fraction leftover at the end of the time series will be lost.  norm ({
leahy
frac
abs
none
}, optional, defaultfrac
) – The normalization of the periodogram to be used.
Other Parameters: gti (2d float array) –
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
– Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!
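The averaging procedure, and the power_err = power/sqrt(m) error estimate quoted below, can be sketched in plain numpy (illustrative only; stingray additionally handles GTIs and the different normalizations):

```python
import numpy as np

def averaged_leahy_periodogram(counts, seg_len):
    """Average Leahy periodograms over m segments; scatter drops as 1/sqrt(m)."""
    m = len(counts) // seg_len
    powers = [
        2.0 * np.abs(np.fft.rfft(counts[i * seg_len:(i + 1) * seg_len])[1:]) ** 2
        / counts[i * seg_len:(i + 1) * seg_len].sum()
        for i in range(m)
    ]
    power = np.mean(powers, axis=0)
    return power, power / np.sqrt(m)    # the power_err approximation

rng = np.random.default_rng(3)
counts = rng.poisson(50.0, size=64 * 256)   # 64 segments of 256 bins
power, power_err = averaged_leahy_periodogram(counts, 256)
```

For pure Poisson noise the averaged power stays near 2 while its bin-to-bin scatter shrinks to roughly 2/sqrt(64) = 0.25.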
norm
¶ the normalization of the periodogram
Type: { leahy
frac
abs
none
}

freq
¶ The array of midbin frequencies that the Fourier transform samples
Type: numpy.ndarray

power
¶ The array of normalized squared absolute values of Fourier amplitudes
Type: numpy.ndarray

power_err
¶ The uncertainties of
power
. An approximation for each bin is given by power_err = power/sqrt(m)
, where m
is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra). Note that for a single realization (m=1
) the error is equal to the power.Type: numpy.ndarray

classical_significances
(threshold=1, trial_correction=False)¶ Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chisquare distributions with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
 The power spectrum is Leahynormalized
 There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
 There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces
(index,pvalues)
for all powers in the power spectrum, where index is the numerical index of the power in question. If athreshold
is set, then only powers with p-values below that threshold are reported, together with their respective indices. If trial_correction
is set toTrue
, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.Parameters:  threshold (float, optional, default
1
) – The threshold to be used when reporting pvalues of potentially significant powers. Must be between 0 and 1. Default is1
(all pvalues will be reported).  trial_correction (bool, optional, default
False
) – A Boolean flag that sets whether thethreshold
will be corrected by the number of frequencies before being applied. This decreases thethreshold
(pvalues need to be lower to count as significant). Default isFalse
(report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
Returns: pvals – A list of
(index, pvalue)
tuples for all powers that have pvalues lower than the threshold specified inthreshold
.Return type: iterable

coherence
()¶ Averaged Coherence function.
Coherence is defined in Vaughan and Nowak, 1996 [vaughan1996]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Compute an averaged Coherence function of cross spectrum by computing coherence function of each segment and averaging them. The return type is a tuple with first element as the coherence function and the second element as the corresponding uncertainty associated with it.
Note: The uncertainty in the coherence function is strictly valid for Gaussian statistics only.
Returns: (coh, uncertainty) – Tuple comprising the coherence function and uncertainty. Return type: tuple of np.ndarray References
[vaughan1996] http://iopscience.iop.org/article/10.1086/310430/pdf

compute_rms
(min_freq, max_freq, white_noise_offset=0.0)¶ Compute the fractional rms amplitude in the power spectrum between two frequencies.
Parameters: Other Parameters: white_noise_offset (float, default 0) – This is the white noise level, in Leahy normalization. In the ideal case, this is 2. Dead time and other instrumental effects can alter it. The user can fit the white noise level outside this function and it will get subtracted from powers here.
Returns:  rms (float) – The fractional rms amplitude contained between
min_freq
andmax_freq
 rms_err (float) – The error on the fractional rms amplitude

plot
(labels=None, axis=None, title=None, marker='', save=False, filename=None)¶ Plot the amplitude of the cross spectrum vs. the frequency using
matplotlib
.Parameters:  labels (iterable, default
None
) – A list of tuple withxlabel
andylabel
as strings.  axis (list, tuple, string, default
None
) – Parameter to set axis properties of thematplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.  title (str, default
None
) – The title of the plot.  marker (str, default '') – Line style and color of the plot. Line styles and colors are
combined in a single format string, as in
'bo'
for blue circles. See matplotlib.pyplot.plot
for more options.  save (boolean, optional, default
False
) – IfTrue
, save the figure with specified filename.  filename (str) – File name of the image to save. Depends on the boolean
save
.

rebin
(df=None, f=None, method='mean')¶ Rebin the power spectrum.
Parameters: df (float) – The new frequency resolution Other Parameters: f (float) – the rebin factor. If specified, it substitutes df
withf*self.df
Returns: The newly binned power spectrum. Return type: bin_cs = Powerspectrum
object

rebin_log
(f=0.01)¶ Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]Parameters: f (float, optional, default 0.01
) – parameter that steers the frequency resolutionReturns: new_spec – The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum
, it will return an object of class AveragedPowerspectrum, too.Return type: Crossspectrum
(or one of its subclasses) object

time_lag
()¶ Calculate time lag and uncertainty.
Equation from Bendat & Piersol, 2011 [bendat2011].
Returns:  lag (np.ndarray) – The time lag
 lag_err (np.ndarray) – The uncertainty in the time lag
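The phase-to-lag conversion underlying this method can be sketched independently of stingray: the cross-spectral phase at each frequency, divided by 2πν, is a time delay. A delayed sinusoid recovers its own delay (values below are made up for the example):

```python
import numpy as np

n, delay, k = 256, 4.0, 8                      # 256 samples, 4-sample delay, signal in bin 8
t = np.arange(n)
x = np.sin(2 * np.pi * k * t / n)
y = np.sin(2 * np.pi * k * (t - delay) / n)    # y lags x by `delay` samples

freq = np.fft.rfftfreq(n, d=1.0)[1:]           # drop the zero-frequency bin
cross = (np.fft.rfft(x) * np.conj(np.fft.rfft(y)))[1:]
lag = np.angle(cross) / (2 * np.pi * freq)     # phase converted to a time delay
```

Only the bin containing the signal carries a meaningful lag; the other bins hold numerical noise.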
Dynamical Powerspectrum¶

class
stingray.
DynamicalPowerspectrum
(lc, segment_size, norm='frac', gti=None)[source]¶ Create a dynamical power spectrum, also often called a spectrogram.
This class will divide a
Lightcurve
object into segments of lengthsegment_size
, create a power spectrum for each segment and store all powers in a matrix as a function of both time (using the midpoint of each segment) and frequency.This is often used to trace changes in period of a (quasi)periodic signal over time.
Parameters:  lc (
stingray.Lightcurve
object) – The time series of which the Dynamical powerspectrum is to be calculated.  segment_size (float, default 1) – Length of the segment of light curve, default value is 1 (in whatever units
the
time
array in the Lightcurve
object uses).  norm ({
leahy
frac
abs
none
}, optional, defaultfrac
) – The normalization of the periodogram to be used.
Other Parameters: gti (2d float array) –
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
– Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care!
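The segmentation into a (frequency, time) matrix can be sketched in plain numpy (a simplified stand-in for the class's internals; names are illustrative only):

```python
import numpy as np

def dynamical_powerspectrum(counts, seg_len, dt=1.0):
    """Per-segment Leahy periodograms stacked into a (frequency, time) matrix."""
    n_seg = len(counts) // seg_len
    freq = np.fft.rfftfreq(seg_len, dt)[1:]
    times = (np.arange(n_seg) + 0.5) * seg_len * dt   # midpoint of each segment
    dyn_ps = np.empty((freq.size, n_seg))
    for i in range(n_seg):
        seg = counts[i * seg_len:(i + 1) * seg_len]
        dyn_ps[:, i] = 2.0 * np.abs(np.fft.rfft(seg)[1:]) ** 2 / seg.sum()
    return freq, times, dyn_ps

rng = np.random.default_rng(4)
counts = rng.poisson(20.0, size=1024)
freq, times, dyn_ps = dynamical_powerspectrum(counts, 128, dt=0.5)
```

Each column of dyn_ps is one segment's power spectrum; tracing a peak across columns tracks a drifting period.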
segment_size
¶ The size of each segment to average. Note that if the total duration of each Lightcurve object in lc is not an integer multiple of the
segment_size
, then any fraction leftover at the end of the time series will be lost.Type: float

dyn_ps
¶ The matrix of normalized squared absolute values of Fourier amplitudes. The axes are given by the
freq
andtime
attributesType: np.ndarray

norm
¶ the normalization of the periodogram
Type: { leahy
frac
abs
none
}

freq
¶ The array of midbin frequencies that the Fourier transform samples
Type: numpy.ndarray

classical_significances
(threshold=1, trial_correction=False)¶ Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chisquare distributions with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
 The power spectrum is Leahynormalized
 There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
 There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces
(index,pvalues)
for all powers in the power spectrum, where index is the numerical index of the power in question. If athreshold
is set, then only powers with p-values below that threshold are reported, together with their respective indices. If trial_correction
is set toTrue
, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.Parameters:  threshold (float, optional, default
1
) – The threshold to be used when reporting pvalues of potentially significant powers. Must be between 0 and 1. Default is1
(all pvalues will be reported).  trial_correction (bool, optional, default
False
) – A Boolean flag that sets whether thethreshold
will be corrected by the number of frequencies before being applied. This decreases thethreshold
(pvalues need to be lower to count as significant). Default isFalse
(report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
Returns: pvals – A list of
(index, pvalue)
tuples for all powers that have pvalues lower than the threshold specified inthreshold
.Return type: iterable

coherence
()¶ Averaged Coherence function.
Coherence is defined in Vaughan and Nowak, 1996 [vaughan1996]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Compute an averaged Coherence function of cross spectrum by computing coherence function of each segment and averaging them. The return type is a tuple with first element as the coherence function and the second element as the corresponding uncertainty associated with it.
Note: The uncertainty in the coherence function is strictly valid for Gaussian statistics only.
Returns: (coh, uncertainty) – Tuple comprising the coherence function and uncertainty. Return type: tuple of np.ndarray References
[vaughan1996] http://iopscience.iop.org/article/10.1086/310430/pdf

compute_rms
(min_freq, max_freq, white_noise_offset=0.0)¶ Compute the fractional rms amplitude in the power spectrum between two frequencies.
Parameters: Other Parameters: white_noise_offset (float, default 0) – This is the white noise level, in Leahy normalization. In the ideal case, this is 2. Dead time and other instrumental effects can alter it. The user can fit the white noise level outside this function and it will get subtracted from powers here.
Returns:  rms (float) – The fractional rms amplitude contained between
min_freq
andmax_freq
 rms_err (float) – The error on the fractional rms amplitude

plot
(labels=None, axis=None, title=None, marker='', save=False, filename=None)¶ Plot the amplitude of the cross spectrum vs. the frequency using
matplotlib
.Parameters:  labels (iterable, default
None
) – A list of tuple withxlabel
andylabel
as strings.  axis (list, tuple, string, default
None
) – Parameter to set axis properties of thematplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.  title (str, default
None
) – The title of the plot.  marker (str, default '') – Line style and color of the plot. Line styles and colors are
combined in a single format string, as in
'bo'
for blue circles. See matplotlib.pyplot.plot
for more options.  save (boolean, optional, default
False
) – IfTrue
, save the figure with specified filename.  filename (str) – File name of the image to save. Depends on the boolean
save
.

rebin
(df=None, f=None, method='mean')¶ Rebin the power spectrum.
Parameters: df (float) – The new frequency resolution Other Parameters: f (float) – the rebin factor. If specified, it substitutes df
withf*self.df
Returns: The newly binned power spectrum. Return type: bin_cs = Powerspectrum
object

rebin_frequency
(df_new, method='sum')[source]¶ Rebin the Dynamic Power Spectrum to a new frequency resolution. Rebinning is an in-place operation, i.e. it will replace the existing
dyn_ps
attribute.While the new resolution need not be an integer multiple of the previous frequency resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.
Parameters:  df_new (float) – The new frequency resolution of the Dynamical Power Spectrum. Must be larger than the frequency resolution of the old Dynamical Power Spectrum!
 method ({
sum
mean
average
}, optional, defaultsum
) – This keyword argument sets whether the counts in the new bins should be summed or averaged.

rebin_log
(f=0.01)¶ Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]Parameters: f (float, optional, default 0.01
) – parameter that steers the frequency resolutionReturns: new_spec – The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum
, it will return an object of class AveragedPowerspectrum, too.Return type: Crossspectrum
(or one of its subclasses) object

rebin_time
(dt_new, method='sum')[source]¶ Rebin the Dynamic Power Spectrum to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.
Parameters:  dt_new (float) – The new time resolution of the Dynamical Power Spectrum. Must be larger than the time resolution of the old Dynamical Power Spectrum!
 method ({"sum"  "mean"  "average"}, optional, default "sum") – This keyword argument sets whether the counts in the new bins should be summed or averaged.
Returns:  time_new (numpy.ndarray) – Time axis with new rebinned time resolution.
 dynspec_new (numpy.ndarray) – New rebinned Dynamical Power Spectrum.
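The time-axis rebinning described above boils down to combining groups of adjacent columns of the dyn_ps matrix. For the simple case of an integer rebin factor (an illustrative helper, not stingray's exact algorithm, which also accepts non-integer ratios):

```python
import numpy as np

def rebin_columns(dyn_ps, factor, method="sum"):
    """Combine every `factor` adjacent time columns; leftover columns are dropped."""
    n_new = dyn_ps.shape[1] // factor
    trimmed = dyn_ps[:, :n_new * factor].reshape(dyn_ps.shape[0], n_new, factor)
    return trimmed.sum(axis=2) if method == "sum" else trimmed.mean(axis=2)

m = np.arange(12.0).reshape(2, 6)
summed = rebin_columns(m, 2)             # shape (2, 3)
averaged = rebin_columns(m, 4, "mean")   # only one full group of 4 columns fits
```

Rebinning along frequency works the same way on the other axis.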

time_lag
()¶ Calculate time lag and uncertainty.
Equation from Bendat & Piersol, 2011 [bendat2011].
Returns:  lag (np.ndarray) – The time lag
 lag_err (np.ndarray) – The uncertainty in the time lag

trace_maximum
(min_freq=None, max_freq=None, sigmaclip=False)[source]¶ Return the indices of the maximum powers in each segment
Powerspectrum
between specified frequencies.Parameters:  min_freq (float, default
None
) – The lower frequency bound.  max_freq (float, default
None
) – The upper frequency bound.
Returns: max_positions – The array of indices of the maximum power in each segment having frequency between
min_freq
andmax_freq
.Return type: np.array
CrossCorrelation¶

class
stingray.
CrossCorrelation
(lc1=None, lc2=None, mode='same')[source]¶ Make a cross-correlation from two light curves.
You can also make an empty
Crosscorrelation
object to populate with your own crosscorrelation data.Parameters:  lc1 (
stingray.Lightcurve
object, optional, defaultNone
) – The first light curve data for correlation calculations.  lc2 (
stingray.Lightcurve
object, optional, defaultNone
) – The second light curve data for the correlation calculations.  mode ({
full
,valid
,same
}, optional, defaultsame
) – A string indicating the size of the correlation output. See the relevantscipy
documentation [scipydocs] for more details.

lc1
¶ The first light curve data for correlation calculations.
Type: stingray.Lightcurve

lc2
¶ The light curve data for the correlation calculations.
Type: stingray.Lightcurve

corr
¶ An array of correlation data calculated from two light curves
Type: numpy.ndarray

time_lags
¶ An array of all possible time lags against which each point in corr is calculated
Type: numpy.ndarray

time_shift
¶ Time lag that gives the maximum value of the correlation between the two light curves. The correlation between the light curves is maximal when one of them is shifted by
time_shift
.Type: float

auto
¶ An internal flag to indicate whether this is a crosscorrelation or an autocorrelation.
Type: bool
References
[scipydocs] https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.signal.correlate.html 
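The quantities this class exposes (corr, time_lags, time_shift) can be illustrated directly with scipy.signal.correlate on synthetic light curves; this sketch shows the underlying idea, not Stingray's API:

```python
import numpy as np
from scipy.signal import correlate

dt = 1.0
rng = np.random.default_rng(0)
base = rng.normal(size=200)
lc1 = base[:-5]                      # lc2 is lc1 advanced by 5 bins
lc2 = base[5:]

# mean-subtract, then correlate over all lags ('same' output size)
corr = correlate(lc1 - lc1.mean(), lc2 - lc2.mean(), mode='same')
n = len(corr)
time_lags = (np.arange(n) - n // 2) * dt
time_shift = time_lags[np.argmax(corr)]   # recovers the 5-bin shift
```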
cal_timeshift
(dt=1.0)[source]¶ Calculate the cross correlation against all possible time lags, both positive and negative.
Parameters: dt (float, optional, default 1.0
) – Time resolution of the light curve, should be passed when object is populated with correlation data and no information about light curve can be extracted. Used to calculatetime_lags
.Returns:  self.time_shift (float) – Value of the time lag that gives maximum value of correlation between two light curves.
 self.time_lags (numpy.ndarray) – An array of
time_lags
calculated from correlation data

plot
(labels=None, axis=None, title=None, marker='', save=False, filename=None, ax=None)[source]¶ Plot the
Crosscorrelation
as a function of time lag using matplotlib: the plot shows self.time_lags
on the x-axis and self.corr
on the y-axis. Parameters:  labels (iterable, default
None
) – A list of tuple withxlabel
andylabel
as strings.  axis (list, tuple, string, default
None
) – Parameter to set axis properties ofmatplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument formatplotlib.pyplot.axis()
function.  title (str, default
None
) – The title of the plot.  marker (str, default

) – Line style and color of the plot. Line styles and colors are combined in a single format string, as in'bo'
for blue circles. Seematplotlib.pyplot.plot
for more options.  save (boolean, optional (default=False)) – If True, save the figure with specified filename.
 filename (str) – File name of the image to save. Depends on the boolean
save
.  ax (
matplotlib.Axes
object) – An axes object to fill with the cross correlation plot.
AutoCorrelation¶

class
stingray.
AutoCorrelation
(lc=None, mode='same')[source]¶ Make an autocorrelation from a light curve. You can also make an empty Autocorrelation object to populate with your own autocorrelation data.
Parameters:  lc (
stingray.Lightcurve
object, optional, defaultNone
) – The light curve data for correlation calculations.  mode ({
full
,valid
,same
}, optional, defaultsame
) – A string indicating the size of the correlation output. See the relevantscipy
documentation [scipydocs] for more details.

lc1, lc2
The light curve data for correlation calculations.
Type: stingray.Lightcurve

corr
¶ An array of correlation data calculated from lightcurve data
Type: numpy.ndarray

time_lags
¶ An array of all possible time lags against which each point in corr is calculated
Type: numpy.ndarray

cal_timeshift
(dt=1.0)¶ Calculate the cross correlation against all possible time lags, both positive and negative.
Parameters: dt (float, optional, default 1.0
) – Time resolution of the light curve, should be passed when object is populated with correlation data and no information about light curve can be extracted. Used to calculatetime_lags
.Returns:  self.time_shift (float) – Value of the time lag that gives maximum value of correlation between two light curves.
 self.time_lags (numpy.ndarray) – An array of
time_lags
calculated from correlation data

plot
(labels=None, axis=None, title=None, marker='', save=False, filename=None, ax=None)¶ Plot the
Crosscorrelation
as a function of time lag using matplotlib: the plot shows self.time_lags
on the x-axis and self.corr
on the y-axis. Parameters:  labels (iterable, default
None
) – A list of tuple withxlabel
andylabel
as strings.  axis (list, tuple, string, default
None
) – Parameter to set axis properties ofmatplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument formatplotlib.pyplot.axis()
function.  title (str, default
None
) – The title of the plot.  marker (str, default

) – Line style and color of the plot. Line styles and colors are combined in a single format string, as in'bo'
for blue circles. Seematplotlib.pyplot.plot
for more options.  save (boolean, optional (default=False)) – If True, save the figure with specified filename.
 filename (str) – File name of the image to save. Depends on the boolean
save
.  ax (
matplotlib.Axes
object) – An axes object to fill with the cross correlation plot.
HigherOrder Fourier and Spectral Timing Products¶
These classes implement higherorder Fourier analysis products (e.g. Bispectrum
) and
Spectral Timing related methods taking advantage of both temporal and spectral information in
modern data sets.
Bispectrum¶

class
stingray.bispectrum.
Bispectrum
(lc, maxlag=None, window=None, scale='biased')[source]¶ Makes a
Bispectrum
object from astingray.Lightcurve
.Bispectrum
is a higher-order time series analysis method, calculated here by the indirect method: as the Fourier transform of the triple autocorrelation function, also called the 3rd-order cumulant.Parameters:  lc (
stingray.Lightcurve
object) – The light curve data for bispectrum calculation.  maxlag (int, optional, default
None
) – Maximum lag on both positive and negative sides of 3rd order cumulant (Similar to lags in correlation). ifNone
, max lag is set to onehalf of length of light curve.  window ({
uniform
,parzen
,hamming
,hanning
,triangular
,welch
,blackman
,flattop
}, optional, default ‘uniform’) – Type of window function to apply to the data.  scale ({
biased
,unbiased
}, optional, defaultbiased
) – Flag to decide biased or unbiased normalization for 3rd order cumulant function.

lc
¶ The light curve data to compute the
Bispectrum
.Type: stingray.Lightcurve
object

maxlag
¶ Maximum lag on both positive and negative sides of 3rd order cumulant (similar to lags in correlation)
Type: int

signal
¶ Row vector of light curve counts for matrix operations
Type: numpy.ndarray

scale
¶ Flag to decide biased or unbiased normalization for 3rd order cumulant function.
Type: { biased
,unbiased
}

lags
¶ An array of time lags for which 3rd order cumulant is calculated
Type: numpy.ndarray

freq
¶ An array of freq values for
Bispectrum
.Type: numpy.ndarray

cum3
¶ A
maxlag*2+1 x maxlag*2+1
matrix containing 3rd order cumulant data for different lags.Type: numpy.ndarray

bispec
A ``maxlag*2+1 x maxlag*2+1`` matrix containing bispectrum data for different frequencies.
Type: numpy.ndarray

bispec_mag
¶ Magnitude of the bispectrum
Type: numpy.ndarray

bispec_phase
¶ Phase of the bispectrum
Type: numpy.ndarray
References
1) The biphase explained: understanding the asymmetries in coupled Fourier components of astronomical time series, by Thomas J. Maccarone, Department of Physics, Box 41051, Science Building, Texas Tech University, Lubbock TX 79409-1051; School of Physics and Astronomy, University of Southampton, SO16 4ES
2) T. S. Rao, M. M. Gabr, An Introduction to Bispectral Analysis and Bilinear Time Series Models, Lecture Notes in Statistics, Volume 24, D. Brillinger, S. Fienberg, J. Gani, J. Hartigan, K. Krickeberg, Editors, SpringerVerlag, New York, NY, 1984.
3) A MATLAB version of the bispectrum is available at the following link: https://www.mathworks.com/matlabcentral/fileexchange/60bisp3cum
Examples
>>> from stingray.lightcurve import Lightcurve
>>> from stingray.bispectrum import Bispectrum
>>> lc = Lightcurve([1, 2, 3, 4, 5], [2, 3, 1, 1, 2])
>>> bs = Bispectrum(lc, maxlag=1)
>>> bs.lags
array([-1.,  0.,  1.])
>>> bs.freq
array([-0.5,  0. ,  0.5])
>>> bs.cum3
array([[-0.2976,  0.1024,  0.1408],
       [ 0.1024,  0.144 , -0.2976],
       [ 0.1408, -0.2976,  0.1024]])
>>> bs.bispec_mag
array([[1.26336794, 0.0032    , 0.0032    ],
       [0.0032    , 0.16      , 0.0032    ],
       [0.0032    , 0.0032    , 1.26336794]])
>>> bs.bispec_phase
array([[ 9.65946229e-01,  2.25347190e-14,  3.46944695e-14],
       [ 0.00000000e+00,  3.14159265e+00,  0.00000000e+00],
       [ 3.46944695e-14,  2.25347190e-14, -9.65946229e-01]])

plot_cum3
(axis=None, save=False, filename=None)[source]¶ Plot the 3rd order cumulant as function of time lags using
matplotlib
. Plot thecum3
attribute on a graph with thelags
attribute on xaxis and yaxis andcum3
on zaxisParameters:  axis (list, tuple, string, default
None
) – Parameter to set axis properties ofmatplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument formatplotlib.pyplot.axis()
method.  save (bool, optional, default
False
) – IfTrue
, save the figure with specified filename.  filename (str) – File name and path of the image to save. Depends on the boolean
save
.
Returns: plt – Reference to plot, call
show()
to display itReturn type: matplotlib.pyplot
object

plot_mag
(axis=None, save=False, filename=None)[source]¶ Plot the magnitude of bispectrum as function of freq using
matplotlib
. Plot thebispec_mag
attribute on a graph withfreq
attribute on the xaxis and yaxis and thebispec_mag
attribute on the zaxis.Parameters:  axis (list, tuple, string, default
None
) – Parameter to set axis properties ofmatplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument formatplotlib.pyplot.axis()
method.  save (bool, optional, default
False
) – IfTrue
, save the figure with specified filename and path.  filename (str) – File name and path of the image to save. Depends on the bool
save
.
Returns: plt – Reference to plot, call
show()
to display itReturn type: matplotlib.pyplot
object

plot_phase
(axis=None, save=False, filename=None)[source]¶ Plot the phase of bispectrum as function of freq using
matplotlib
. Plot thebispec_phase
attribute on a graph withphase
attribute on the xaxis and yaxis and thebispec_phase
attribute on the zaxis.Parameters:  axis (list, tuple, string, default
None
) – Parameter to set axis properties ofmatplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument formatplotlib.pyplot.axis()
function.  save (bool, optional, default
False
) – IfTrue
, save the figure with specified filename and path.  filename (str) – File name and path of the image to save. Depends on the bool
save
.
Returns: plt – Reference to plot, call
show()
to display itReturn type: matplotlib.pyplot
object
Covariancespectrum¶

class
stingray.
Covariancespectrum
(data, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]¶ Compute a covariance spectrum for the data. The input data can be either event data or pre-made light curves. Event data can either be in the form of a
numpy.ndarray
with(time stamp, energy)
pairs or astingray.events.EventList
object. If light curves are formed ahead of time, then a list ofstingray.Lightcurve
objects should be passed to the object, ideally one light curve for each band of interest.For the case where the data is input as a list of
stingray.Lightcurve
objects, the reference band(s) should either be a single
stingray.Lightcurve
object,  a list of
stingray.Lightcurve
objects with the reference band for each band of interest premade, or None
, in which case reference bands will be formed by combining all light curves except for the band of interest.
In the case of event data,
band_interest
andref_band_interest
can be (multiple) pairs of energies, and the light curves for the bands of interest and reference bands will be produced dynamically.Parameters:  data ({
numpy.ndarray
stingray.events.EventList
object  list ofstingray.Lightcurve
objects}) –data
contains the time series data, either in the form of a 2D array of(time stamp, energy)
pairs for event data, or as a list of light curves. Note : The event list must be in sorted order with respect to the times of arrivals.  dt (float) – The time resolution of the
stingray.Lightcurve
formed from the energy bin. Only used ifdata
is an event list.  band_interest ({
None
, iterable of tuples}) – IfNone
, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list ofstingray.Lightcurve
objects, then the user may supply their energy values here, for construction of a reference band.  ref_band_interest ({
None
, tuple,stingray.Lightcurve
, list ofstingray.Lightcurve
objects}) – Defines the reference band to be used for comparison with the bands of interest. IfNone
, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a singlestingray.Lightcurve
object to be used (the same for each band of interest) or a list ofstingray.Lightcurve
objects, one for each band of interest.  std (float or np.array or list of numbers) – The term
std
is used to calculate the excess variance of a band. Ifstd
is set toNone
, default Poisson case is taken and the std is calculated asmean(lc)**0.5
. If a single float is given, it is used as the standard deviation for all bands; if an iterable of numbers is given, their mean is used instead.

unnorm_covar
¶ An array of arrays with mid point
band_interest
and their covariance. It is the array form of the dictionary energy_covar
. The covariance values are unnormalized.Type: np.ndarray

covar
¶ Normalized covariance spectrum.
Type: np.ndarray

covar_error
¶ Errors of the normalized covariance spectrum.
Type: np.ndarray
References
[1] Wilkinson, T. and Uttley, P. (2009), Accretion disc variability in the hard state of black hole X-ray binaries. Monthly Notices of the Royal Astronomical Society, 397: 666–676. doi: 10.1111/j.1365-2966.2009.15008.x
Examples
See the notebooks repository for detailed notebooks on the code.
AveragedCovariancespectrum¶

class
stingray.
AveragedCovariancespectrum
(data, segment_size, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]¶ Compute a covariance spectrum for the data, defined in [covar spectrum]_ Equation 15.
Parameters:  data ({numpy.ndarray  list of
stingray.Lightcurve
objects}) –data
contains the time series data, either in the form of a 2D array of(time stamp, energy)
pairs for event data, or as a list ofstingray.Lightcurve
objects. Note : The event list must be in sorted order with respect to the times of arrivals.  segment_size (float) – The length of each segment in the averaged covariance spectrum. The number of segments will be calculated automatically using the total length of the data set and the segment_size defined here.
 dt (float) – The time resolution of the
stingray.Lightcurve
formed from the energy bin. Only used if data is an event list.  band_interest ({
None
, iterable of tuples}) – IfNone
, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list ofstingray.Lightcurve
objects, then the user may supply their energy values here, for construction of a reference band.  ref_band_interest ({None, tuple,
stingray.Lightcurve
, list ofstingray.Lightcurve
objects}) – Defines the reference band to be used for comparison with the bands of interest. If None, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a singlestingray.Lightcurve
object to be used (the same for each band of interest) or a list ofstingray.Lightcurve
objects, one for each band of interest.  std (float or np.array or list of numbers) – The term
std
is used to calculate the excess variance of a band. Ifstd
is set toNone
, default Poisson case is taken and thestd
is calculated asmean(lc)**0.5
. If a single float is given, it is used as the standard deviation for all bands; if an iterable of numbers is given, their mean is used instead.

unnorm_covar
¶ An array of arrays with mid point band_interest and their covariance. It is the array form of the dictionary
energy_covar
. The covariance values are unnormalized.Type: np.ndarray

covar
¶ Normalized covariance spectrum.
Type: np.ndarray

covar_error
¶ Errors of the normalized covariance spectrum.
Type: np.ndarray
References
VarEnergySpectrum¶
Abstract base class for spectral timing products including both variability and spectral information.

class
stingray.varenergyspectrum.
VarEnergySpectrum
(events, freq_interval, energy_spec, ref_band=None, bin_time=1, use_pi=False, segment_size=None, events2=None)[source]¶ Base class for variability-energy spectra.
This class is only a base for the various variability spectra and is not meant to be instantiated by itself.
Parameters:  events (
stingray.events.EventList
object) – event list  freq_interval (
[f0, f1]
, floats) – the frequency range over which to calculate the variability quantity  energy_spec (list or tuple
(emin, emax, N, type)
) – if alist
is specified, this is interpreted as a list of bin edges; if atuple
is provided, this will encode the minimum and maximum energies, the number of intervals, andlin
orlog
.
Other Parameters:  ref_band (
[emin, emax
], floats; defaultNone
) – minimum and maximum energy of the reference band. IfNone
, the full band is used.  use_pi (bool, default
False
) – Use channel instead of energy  events2 (
stingray.events.EventList
object) – event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.

events1
¶ list of events used to produce the spectrum
Type: arraylike

events2
¶ if the spectrum requires it, second list of events
Type: arraylike

freq_interval
¶ interval of frequencies used to calculate the spectrum
Type: arraylike

energy_intervals
¶ energy intervals used for the spectrum
Type: [[e00, e01], [e10, e11], ...]

spectrum
¶ the spectral values, corresponding to each energy interval
Type: arraylike

spectrum_error
¶ the error bars corresponding to spectrum
Type: arraylike
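The two accepted forms of energy_spec can be sketched with a hypothetical helper that turns a (emin, emax, N, type) tuple into bin edges, following the documented semantics:

```python
import numpy as np

def energy_edges(energy_spec):
    """A list is taken as bin edges as-is; a tuple
    (emin, emax, N, kind) builds N intervals, linearly
    or logarithmically spaced."""
    if isinstance(energy_spec, tuple):
        emin, emax, n, kind = energy_spec
        if kind == 'lin':
            return np.linspace(emin, emax, n + 1)
        return np.logspace(np.log10(emin), np.log10(emax), n + 1)
    return np.asarray(energy_spec)

edges = energy_edges((0.3, 10.0, 5, 'log'))   # 6 edges, 5 log-spaced bins
```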
RmsEnergySpectrum¶

class
stingray.varenergyspectrum.
RmsEnergySpectrum
(events, freq_interval, energy_spec, ref_band=None, bin_time=1, use_pi=False, segment_size=None, events2=None)[source]¶ Calculate the rmsEnergy spectrum.
For each energy interval, calculate the power density spectrum in fractional r.m.s. normalization. If
events2
is specified, the cospectrum is used instead of the PDS.Parameters:  events (
stingray.events.EventList
object) – event list  freq_interval (
[f0, f1]
, list of float) – the frequency range over which to calculate the variability quantity  energy_spec (list or tuple
(emin, emax, N, type)
) – if alist
is specified, this is interpreted as a list of bin edges; if atuple
is provided, this will encode the minimum and maximum energies, the number of intervals, andlin
orlog
.
Other Parameters:  ref_band (
[emin, emax]
, float; defaultNone
) – minimum and maximum energy of the reference band. IfNone
, the full band is used.  use_pi (bool, default
False
) – Use channel instead of energy  events2 (
stingray.events.EventList
object) – event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.

events1
¶ list of events used to produce the spectrum
Type: arraylike

events2
¶ if the spectrum requires it, second list of events
Type: arraylike

freq_interval
¶ interval of frequencies used to calculate the spectrum
Type: arraylike

energy_intervals
¶ energy intervals used for the spectrum
Type: [[e00, e01], [e10, e11], ...]

spectrum
¶ the spectral values, corresponding to each energy interval
Type: arraylike

spectrum_error
¶ the errorbars corresponding to spectrum
Type: arraylike
LagEnergySpectrum¶

class
stingray.varenergyspectrum.
LagEnergySpectrum
(events, freq_interval, energy_spec, ref_band=None, bin_time=1, use_pi=False, segment_size=None, events2=None)[source]¶ Calculate the lagenergy spectrum.
For each energy interval, calculate the mean lag in the specified frequency range. If
events2
is specified, the reference band is taken from the second event list.Parameters:  events (
stingray.events.EventList
object) – event list  freq_interval (
[f0, f1]
, list of float) – the frequency range over which calculating the variability quantity  energy_spec (list or tuple
(emin, emax, N, type)
) – if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, this will encode the minimum and maximum energies, the number of intervals, andlin
orlog
.
Other Parameters:  ref_band (
[emin, emax]
, float; defaultNone
) – minimum and maximum energy of the reference band. IfNone
, the full band is used.  use_pi (bool, default
False
) – Use channel instead of energy  events2 (
stingray.events.EventList
object) – event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.

events1
¶ list of events used to produce the spectrum
Type: arraylike

events2
¶ if the spectrum requires it, second list of events
Type: arraylike

freq_interval
¶ interval of frequencies used to calculate the spectrum
Type: arraylike

energy_intervals
¶ energy intervals used for the spectrum
Type: [[e00, e01], [e10, e11], ...]

spectrum
¶ the spectral values, corresponding to each energy interval
Type: arraylike

spectrum_error
¶ the errorbars corresponding to spectrum
Type: arraylike
ExcessVarianceSpectrum¶

class
stingray.varenergyspectrum.
ExcessVarianceSpectrum
(events, freq_interval, energy_spec, bin_time=1, use_pi=False, segment_size=None, normalization='fvar')[source]¶ Calculate the Excess Variance spectrum.
For each energy interval, calculate the excess variance in the specified frequency range.
Parameters:  events (
stingray.events.EventList
object) – event list  freq_interval (
[f0, f1]
, list of float) – the frequency range over which to calculate the variability quantity  energy_spec (list or tuple
(emin, emax, N, type)
) – if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, this will encode the minimum and maximum energies, the number of intervals, andlin
orlog
.
Other Parameters:  ref_band (
[emin, emax]
, floats; defaultNone
) – minimum and maximum energy of the reference band. IfNone
, the full band is used.  use_pi (bool, default
False
) – Use channel instead of energy

events1
¶ list of events used to produce the spectrum
Type: arraylike

freq_interval
¶ interval of frequencies used to calculate the spectrum
Type: arraylike

energy_intervals
¶ energy intervals used for the spectrum
Type: [[e00, e01], [e10, e11], ...]

spectrum
¶ the spectral values, corresponding to each energy interval
Type: arraylike

spectrum_error
¶ the errorbars corresponding to spectrum
Type: arraylike
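Assuming normalization='fvar' follows the standard fractional-rms definition (Vaughan et al. 2003), the quantity computed per energy bin can be sketched as follows; the helper name and data are illustrative:

```python
import numpy as np

def fvar(counts, counts_err):
    """Fractional rms variability amplitude:
    F_var = sqrt((S^2 - <err^2>) / mean^2), with S^2 the sample
    variance and <err^2> the mean squared measurement error."""
    s2 = np.var(counts, ddof=1)
    mean_err2 = np.mean(np.asarray(counts_err) ** 2)
    return np.sqrt((s2 - mean_err2) / np.mean(counts) ** 2)

counts = np.array([10.0, 20.0, 5.0, 25.0, 8.0, 22.0])
err = np.sqrt(counts)          # Poisson errors
f = fvar(counts, err)          # ~0.49 for this toy light curve
```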
Utilities¶
Commonly used utility functionality, including Good Time Interval operations and input/output helper methods.
GTI Functionality¶

stingray.gti.
load_gtis
(fits_file, gtistring=None)[source]¶ Load Good Time Intervals (GTIs) from
HDU EVENTS
of filefits_file
. The file is expected to be in FITS format.Parameters:  fits_file (str) – File name and path of the FITS file.  gtistring (str, default None) – Name of the extension containing the GTIs.Returns: gti_list – A list of GTI
(start, stop)
pairs extracted from the FITS file.Return type: list

stingray.gti.
check_gtis
(gti)[source]¶ Check if GTIs are wellbehaved.
Check that:
 the shape of the GTI array is correct;
 no start > end
 no overlaps.
Parameters: gti (list) – A list of GTI
(start, stop)
pairs extracted from the FITS file.Raises: TypeError
– If GTIs are of the wrong shapeValueError
– If GTIs have overlapping or displaced values

stingray.gti.
create_gti_mask
(time, gtis, safe_interval=None, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]¶ Create GTI mask.
Assumes that no overlaps are present between GTIs
Parameters:  time (numpy.ndarray) – An array of time stamps
 gtis (
[[g0_0, g0_1], [g1_0, g1_1], ...]
, float arraylike) – The list of GTIs
Other Parameters:  safe_interval (float or
[float, float]
, default None) – A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs. If None, no safe interval is applied to data.  min_length (float) – An optional minimum length for the GTIs to be applied. Only GTIs longer
than
min_length
will be considered when creating the mask.  return_new_gtis (bool) – If
True
, return the list of new GTIs (ifmin_length > 0
)  dt (float) – Time resolution of the data, i.e. the interval between time stamps
 epsilon (float) – fraction of
dt
that is tolerated at the borders of a GTI
Returns:  mask (bool array) – A mask labelling all time stamps that are included in the GTIs versus those that are not.
 new_gtis (
Nx2
array) – An array of new GTIs created by this function
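A simplified version of the masking logic (without safe intervals or minimum-length filtering) can be sketched as follows; the helper is a hypothetical stand-in, not the library function:

```python
import numpy as np

def gti_mask_sketch(time, gtis, dt=1.0, epsilon=0.001):
    """Mark time stamps whose bins fall inside any GTI, with an
    epsilon * dt tolerance at the borders."""
    time = np.asarray(time)
    mask = np.zeros(len(time), dtype=bool)
    for start, stop in gtis:
        mask |= ((time >= start + dt / 2 - epsilon * dt) &
                 (time <= stop - dt / 2 + epsilon * dt))
    return mask

time = np.arange(10) + 0.5                  # bin centers, dt = 1
mask = gti_mask_sketch(time, [[0, 4], [6, 8]])
# only bins fully inside [0, 4] and [6, 8] are flagged good
```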

stingray.gti.
create_gti_mask_complete
(time, gtis, safe_interval=0, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]¶ Create GTI mask, allowing for non-constant
dt
.Assumes that no overlaps are present between GTIs
Parameters:  time (numpy.ndarray) – An array of time stamps
 gtis (
[[g0_0, g0_1], [g1_0, g1_1], ...]
, float arraylike) – The list of GTIs
Other Parameters:  safe_interval (float or [float, float]) – A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs.
 min_length (float) – An optional minimum length for the GTIs to be applied. Only GTIs longer than
min_length
will be considered when creating the mask.  return_new_gtis (bool) – If
True
, return the list of new GTIs (ifmin_length > 0
)  dt (float) – Time resolution of the data, i.e. the interval between time stamps
 epsilon (float) – fraction of
dt
that is tolerated at the borders of a GTI
Returns:  mask (bool array) – A mask labelling all time stamps that are included in the GTIs versus those that are not.
 new_gtis (Nx2 array) – An array of new GTIs created by this function

stingray.gti.
create_gti_from_condition
(time, condition, safe_interval=0, dt=None)[source]¶ Create a GTI list from a time array and a boolean mask (
condition
).Parameters:  time (arraylike) – Array containing time stamps
 condition (arraylike) – An array of bools, of the same length of time.
A possible condition can be, e.g., the result of
lc > 0
.
Returns: gtis – The newly created GTIs
Return type: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
Other Parameters:  safe_interval (float or
[float, float]
) – A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs.  dt (float) – The width (in sec) of each bin of the time array. Can be irregular.
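The start/stop extraction from a boolean condition can be sketched with numpy.diff on the 0/1 mask; this hypothetical helper assumes evenly spaced bin centers:

```python
import numpy as np

def gtis_from_condition(time, condition, dt):
    """Turn a boolean mask over evenly spaced time stamps into
    [[start, stop], ...] GTIs (bin centers extended by dt / 2)."""
    cond = np.asarray(condition).astype(int)
    # +1 where a good stretch starts, -1 one past where it ends
    edges = np.diff(np.concatenate([[0], cond, [0]]))
    starts = np.where(edges == 1)[0]
    stops = np.where(edges == -1)[0] - 1
    return np.column_stack([time[starts] - dt / 2, time[stops] + dt / 2])

time = np.arange(6, dtype=float)          # bin centers 0..5, dt = 1
lc = np.array([0, 2, 3, 0, 1, 0])
gtis = gtis_from_condition(time, lc > 0, dt=1.0)
# gtis -> [[0.5, 2.5], [3.5, 4.5]]
```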

stingray.gti.
cross_two_gtis
(gti0, gti1)[source]¶ Extract the common intervals from two GTI lists EXACTLY.
Parameters:  gti0 (iterable of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
) –  gti1 (iterable of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
) – The two lists of GTIs to be crossed.
Returns: gtis – The newly created GTIs
Return type: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
See also
cross_gtis()
 From multiple GTI lists, extract common intervals EXACTLY
Examples
>>> gti1 = np.array([[1, 2]])
>>> gti2 = np.array([[1, 2]])
>>> newgti = cross_gtis([gti1, gti2])
>>> np.all(newgti == [[1, 2]])
True
>>> gti1 = np.array([[1, 4]])
>>> gti2 = np.array([[1, 2], [2, 4]])
>>> newgti = cross_gtis([gti1, gti2])
>>> np.all(newgti == [[1, 4]])
True

stingray.gti.
cross_gtis
(gti_list)[source]¶ From multiple GTI lists, extract the common intervals EXACTLY.
Parameters: gti_list (arraylike) – List of GTI arrays, each one in the usual format [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
Returns: gti0 – [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
The newly created GTIsReturn type: 2d float array See also
cross_two_gtis()
 Extract the common intervals from two GTI lists EXACTLY

stingray.gti.
get_btis
(gtis, start_time=None, stop_time=None)[source]¶ From GTIs, obtain bad time intervals, i.e. the intervals not covered by the GTIs.
GTIs have to be well-behaved, in the sense that they have to pass
check_gtis
.Parameters: Returns: btis – A list of bad time intervals
Return type: numpy.ndarray

stingray.gti.
gti_len
(gti)[source]¶ Return the total good time from a list of GTIs.
Parameters: gti (iterable) – A list of Good Time Intervals Returns: gti_len – The sum of lengths of all GTIs Return type: float
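For well-behaved (non-overlapping) GTIs, this is equivalent to summing the interval lengths:

```python
import numpy as np

# total good time = sum of (stop - start) over all GTIs
gti = np.array([[0.0, 100.0], [150.0, 300.0]])
total_good_time = np.sum(gti[:, 1] - gti[:, 0])
# total_good_time == 250.0
```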

stingray.gti.
check_separate
(gti0, gti1)[source]¶ Check if two GTIs do not overlap.
Parameters:  gti0 (2d float array) – List of GTIs of form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 gti1 (2d float array) – List of GTIs of form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
Returns: separate –
True
if GTIs are mutually exclusive,False
if notReturn type:  gti0 (2d float array) – List of GTIs of form

stingray.gti.
append_gtis
(gti0, gti1)[source]¶ Union of two non-overlapping GTIs.
If the two GTIs “touch”, this is tolerated and the touching GTIs are joined in a single one.
Parameters:  gti0 (2d float array) – List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 gti1 (2d float array) – List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
Returns: gti – The newly created GTI
Return type: 2d float array
Examples
>>> np.all(append_gtis([[0, 1]], [[2, 3]]) == [[0, 1], [2, 3]])
True
>>> np.all(append_gtis([[0, 1]], [[1, 3]]) == [[0, 3]])
True

stingray.gti.
join_gtis
(gti0, gti1)[source]¶ Union of two GTIs.
If GTIs are mutually exclusive, it calls
append_gtis
. Otherwise we put the extremes of partially overlapping GTIs on an ideal line and look at the number of opened and closed intervals. When the number of closed and opened intervals is the same, the full GTI is complete and we close it.In practice, we assign to each opening time of a GTI the value
1
, and the value1
to each closing time; when the cumulative sum is zero, the GTI has ended. The timestamp after each closed GTI is the start of a new one.
Parameters:  gti0 (2d float array) – List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 gti1 (2d float array) – List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
Returns: gti – The newly created GTI
Return type: 2d float array
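The cumulative-sum bookkeeping described above can be sketched in a few lines of numpy; this is a simplified stand-in, not the library function:

```python
import numpy as np

def join_gtis_sketch(gti0, gti1):
    """Union of two GTI lists using the +1/-1 cumulative-sum trick:
    +1 at each opening time, -1 at each closing time; a GTI ends
    wherever the cumulative sum returns to zero."""
    gti0, gti1 = np.asarray(gti0, float), np.asarray(gti1, float)
    n = len(gti0) + len(gti1)
    times = np.concatenate([gti0[:, 0], gti1[:, 0],
                            gti0[:, 1], gti1[:, 1]])
    signs = np.concatenate([np.ones(n), -np.ones(n)])
    # stable sort keeps opens before closes at equal times,
    # so touching GTIs are merged into one
    order = np.argsort(times, kind='stable')
    times, signs = times[order], signs[order]
    csum = np.cumsum(signs)
    starts = times[np.concatenate([[True], csum[:-1] == 0])]
    stops = times[csum == 0]
    return np.column_stack([starts, stops])

merged = join_gtis_sketch([[0, 3]], [[2, 5]])     # -> [[0, 5]]
disjoint = join_gtis_sketch([[0, 1]], [[2, 3]])   # -> [[0, 1], [2, 3]]
```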

stingray.gti.
time_intervals_from_gtis
(gtis, chunk_length, fraction_step=1, epsilon=1e05)[source]¶ Compute start/stop times of equal time intervals, compatible with GTIs.
Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI).
Parameters:  gtis (2d float array) – List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 chunk_length (float) – Length of the time segments
 fraction_step (float) – If the step is smaller than a full chunk_length (e.g. a moving window), this indicates the ratio between the step and chunk_length (e.g. 0.5 means that the window shifts by half a chunk_length)
Returns:  spectrum_start_times (arraylike) – List of starting times to use in the spectral calculations.
 spectrum_stop_times (arraylike) – List of end times to use in the spectral calculations.
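The segment-tiling logic can be sketched as below. The `segment_starts_stops` helper is hypothetical (not stingray's code) and ignores the epsilon border tolerance of the real function:

```python
import numpy as np

def segment_starts_stops(gtis, chunk_length, fraction_step=1):
    """Tile each GTI with segments of length chunk_length."""
    starts, stops = [], []
    step = chunk_length * fraction_step
    for gti_start, gti_stop in gtis:
        t = gti_start
        # Keep only segments that fit entirely inside the GTI
        while t + chunk_length <= gti_stop:
            starts.append(t)
            stops.append(t + chunk_length)
            t += step
    return np.array(starts), np.array(stops)
```

For `gtis=[[0, 5], [6, 8]]` and `chunk_length=2`, this gives start times `[0, 2, 6]` and stop times `[2, 4, 8]`.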

stingray.gti.
bin_intervals_from_gtis
(gtis, chunk_length, time, dt=None, fraction_step=1, epsilon=0.001)[source]¶ Compute start/stop times of equal time intervals, compatible with GTIs, and map them to the indices of an array of time stamps.
Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI). In this case, it is necessary to specify the time array containing the times of the light curve bins. Returns start and stop bins of the intervals to use for the PDS
Parameters:  gtis (2d float array) – List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 chunk_length (float) – Length of each time segment
 time (arraylike) – Array of time stamps
Other Parameters:  dt (float, default median(diff(time))) – Time resolution of the light curve.
 epsilon (float, default 0.001) – The tolerance, in fraction of
dt
, for the comparisons at the borders
 fraction_step (float) – If the step is smaller than a full chunk_length (e.g. a moving window), this indicates the ratio between the step and chunk_length (e.g. 0.5 means that the window shifts by half a chunk_length)
Returns:  spectrum_start_bins (arraylike) – List of starting bins in the original time array to use in spectral calculations.
 spectrum_stop_bins (arraylike) – List of end bins to use in the spectral calculations.
Examples
>>> time = np.arange(0.5, 13.5)
>>> gtis = [[0, 5], [6, 8]]
>>> chunk_length = 2
>>> start_bins, stop_bins = bin_intervals_from_gtis(gtis,chunk_length,time)
>>> np.all(start_bins == [0, 2, 6])
True
>>> np.all(stop_bins == [2, 4, 8])
True
>>> np.all(time[start_bins[0]:stop_bins[0]] == [0.5, 1.5])
True
>>> np.all(time[start_bins[1]:stop_bins[1]] == [2.5, 3.5])
True

stingray.gti.
gti_border_bins
(gtis, time, dt=None, epsilon=0.001)[source]¶ Find the indices in a time array corresponding to the borders of GTIs.
GTIs shorter than the bin time are not returned.
Parameters:  gtis (2d float array) – List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 time (arraylike) – Time stamps of light curve bins
Returns:  spectrum_start_bins (arraylike) – List of starting bins of each GTI
 spectrum_stop_bins (arraylike) – List of stop bins of each GTI. The elements corresponding to these bins should not be included.
Examples
>>> times = np.arange(0.5, 13.5)
>>> start_bins, stop_bins = gti_border_bins(
...     [[0, 5], [6, 8]], times)
>>> np.all(start_bins == [0, 6])
True
>>> np.all(stop_bins == [5, 8])
True
>>> np.all(times[start_bins[0]:stop_bins[0]] == [0.5, 1.5, 2.5, 3.5, 4.5])
True
>>> np.all(times[start_bins[1]:stop_bins[1]] == [6.5, 7.5])
True
I/O Functionality¶

stingray.io.
common_name
(str1, str2, default='common')[source]¶ Strip two strings of the letters not in common.
Filenames must be of same length and only differ by a few letters.
Parameters:  str1 (str) – The first file name
 str2 (str) – The second file name
Other Parameters: default (str) – The string to return if common_str is empty
Returns: common_str – A string containing the parts of the two names in common
Return type: str

stingray.io.
high_precision_keyword_read
(hdr, keyword)[source]¶ Read FITS header keywords, also if split in two.
In the case where the keyword is split in two, like
MJDREF = MJDREFI + MJDREFF
in some missions, this function returns the summed value. Otherwise, it returns the content of the single keyword.
Parameters:  hdr (dict_like) – The FITS header structure, or a dictionary
 keyword (str) – The key to read in the header
Returns: value – The value of the key, or
None
if something went wrong
Return type: long double
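The split-keyword logic can be sketched with a plain dictionary standing in for the FITS header. The `read_mjdref` helper below is hypothetical and hard-codes the MJDREF keywords; the real function handles an arbitrary keyword passed as an argument:

```python
import numpy as np

def read_mjdref(hdr):
    """Sum split MJDREFI/MJDREFF keywords, or fall back to MJDREF."""
    if "MJDREFI" in hdr and "MJDREFF" in hdr:
        # Summing in long double keeps the full time precision
        return np.longdouble(hdr["MJDREFI"]) + np.longdouble(hdr["MJDREFF"])
    if "MJDREF" in hdr:
        return np.longdouble(hdr["MJDREF"])
    return None
```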

stingray.io.
load_events_and_gtis
(fits_file, additional_columns=None, gtistring='GTI, STDGTI', gti_file=None, hduname='EVENTS', column='TIME')[source]¶ Load event lists and GTIs from one or more files.
Loads event list from HDU EVENTS of file
fits_file
, with Good Time Intervals. Optionally, returns additional columns of data from the same HDU of the events.
Parameters:  fits_file (str) – The file name and absolute path to the event file
 additional_columns (list of str, optional) – A list of keys for additional columns to extract from the event HDU
 gtistring (str) – Comma-separated list of accepted GTI extension names
 gti_file (str, optional) – External GTI file
 hduname (str) – Name of the HDU containing the event list
 column (str) – The column containing the time stamps
Returns:  ev_list (arraylike) – An array of time stamps of events
 gtis (list of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
) – Good Time Intervals  additional_data (dict) – A dictionary, where each key is the one specified in
additional_columns
. The data are an array with the values of the specified column in the fits file.  t_start, t_stop (float) – Start and stop times of the observation

stingray.io.
mkdir_p
(path)[source]¶ Safe
mkdir
function, found at [somkdir].Parameters: path (str) – The absolute path to the directory to be created Notes
[somkdir] http://stackoverflow.com/questions/600268/mkdir-p-functionality-in-python

stingray.io.
read
(filename, format_='pickle', **kwargs)[source]¶ Return a saved class instance.
Parameters:  filename (str) – The name of the file to be read
 format_ (str) – The format of the file. Supported formats are pickle, ascii, hdf5 and fits
Returns: data –
 If format_ is pickle, an object is returned.
 If format_ is ascii, an astropy.table object is returned.
 If format_ is hdf5 or fits, a dictionary object is returned.
Return type: {object | astropy.table | dict}

stingray.io.
read_header_key
(fits_file, key, hdu=1)[source]¶ Read the header key key from HDU hdu of the file
fits_file
Parameters:  fits_file (str) – The file name and absolute path to the event file
 key (str) – The keyword to be read
Other Parameters: hdu (int) – Index of the HDU extension from which the header key is to be read.
Returns: value – The value stored under
key
infits_file
Return type: object

stingray.io.
ref_mjd
(fits_file, hdu=1)[source]¶ Read
MJDREFF
,MJDREFI
or, if that fails, MJDREF
, from the FITS header.
Parameters: fits_file (str) – The file name and absolute path to the event file.
Other Parameters: hdu (int) – Index of the HDU extension from which the header key is to be read.
Returns: mjdref – The reference MJD
Return type: numpy.longdouble

stingray.io.
savefig
(filename, **kwargs)[source]¶ Save a figure plotted by
matplotlib
.Note : This function is supposed to be used after the
plot
function. Otherwise it will save a blank image with no plot.Parameters:  filename (str) – The name of the image file. Extension must be specified in the
file name. For example filename with .png extension will give a
rasterized image while
.pdf
extension will give a vectorized output.  kwargs (keyword arguments) – Keyword arguments to be passed to
savefig
function ofmatplotlib.pyplot
. For example, use bbox_inches='tight' to remove the undesirable whitespace around the image.

stingray.io.
split_numbers
(number)[source]¶ Split high precision number(s) into doubles. TODO: Consider the option of using a third number to specify shift.
Parameters: number (long double) – The input high precision number which is to be split Returns:  number_I (double) – First part of high precision number
 number_F (double) – Second part of high precision number
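One way to sketch the integer/fractional split is shown below. The `split_number` helper is hypothetical; the library function also handles arrays of numbers:

```python
import numpy as np

def split_number(number):
    """Split a high-precision number into an integer and a fractional double."""
    number = np.longdouble(number)
    number_i = np.float64(np.floor(number))
    # The residual is computed in long double, then rounded to a double
    number_f = np.float64(number - np.longdouble(number_i))
    return number_i, number_f
```

The two doubles sum back to the input to within double precision.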
Other Utility Functions¶

stingray.utils.
simon
(message, **kwargs)[source]¶ The Statistical Interpretation MONitor.
A warning system designed to always remind the user that Simon is watching him/her.
Parameters:  message (string) – The message that is thrown
 kwargs (dict) – The rest of the arguments that are passed to
warnings.warn

stingray.utils.
rebin_data
(x, y, dx_new, yerr=None, method='sum', dx=None)[source]¶ Rebin some data to an arbitrary new data resolution. Either sum the data points in the new bins or average them.
Parameters:  x (iterable) – The independent variable with some resolution
dx_old = x[1] - x[0]
 y (iterable) – The dependent variable to be binned
 dx_new (float) – The new resolution of the independent variable
x
Other Parameters:  yerr (iterable, optional) – The uncertainties of
y
, to be propagated during binning.  method ({
sum
average
mean
}, optional, defaultsum
) – The method to be used in binning. Either sum the samplesy
in each new bin ofx
, or take the arithmetic mean.  dx (float) – The old resolution (otherwise, calculated from median diff)
Returns:  xbin (numpy.ndarray) – The midpoints of the new bins in
x
 ybin (numpy.ndarray) – The binned quantity
y
 ybin_err (numpy.ndarray) – The uncertainties of the binned values of
y
.  step_size (float) – The size of the binning step
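The summation case can be sketched for integer bin ratios. The `rebin_sum` helper below is hypothetical; the real rebin_data also propagates errors, supports averaging, and handles non-integer bin ratios:

```python
import numpy as np

def rebin_sum(x, y, dx_new, dx=None):
    """Rebin y into wider bins of x by summation (integer bin ratio assumed)."""
    x = np.asarray(x, dtype=float)
    if dx is None:
        dx = np.median(np.diff(x))
    step = int(round(dx_new / dx))     # samples per new bin
    nbins = len(x) // step
    y = np.asarray(y)[:nbins * step]
    ybin = y.reshape(nbins, step).sum(axis=1)
    # Midpoints of the new bins
    xbin = x[0] - dx / 2 + dx_new * (np.arange(nbins) + 0.5)
    return xbin, ybin
```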

stingray.utils.
rebin_data_log
(x, y, f, y_err=None, dx=None)[source]¶ Logarithmic rebin of some data. Particularly useful for the power spectrum.
Each new bin width depends on the previous bin width, modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]Parameters:  x (iterable) – The independent variable with some resolution
dx_old = x[1] - x[0]
 y (iterable) – The dependent variable to be binned
 f (float) – The factor of increase of each bin with respect to the previous one.
Other Parameters:  yerr (iterable, optional) – The uncertainties of
y
to be propagated during binning.  method ({
sum
average
mean
}, optional, defaultsum
) – The method to be used in binning. Either sum the samplesy
in each new bin ofx
or take the arithmetic mean.  dx (float, optional) – The binning step of the initial
x
Returns:  xbin (numpy.ndarray) – The midpoints of the new bins in
x
 ybin (numpy.ndarray) – The binned quantity
y
 ybin_err (numpy.ndarray) – The uncertainties of the binned values of
y
 step_size (float) – The size of the binning step
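The geometric growth of the bin widths can be sketched by building the edges directly. The `log_bin_edges` helper is hypothetical, showing only the edge construction, not the binning of y:

```python
import numpy as np

def log_bin_edges(x, f, dx=None):
    """Bin edges where each bin is (1 + f) times wider than the previous one."""
    x = np.asarray(x, dtype=float)
    if dx is None:
        dx = np.diff(x)[0]
    edges = [x[0] - dx / 2]
    while edges[-1] < x[-1]:
        dx *= 1 + f                 # d nu_j = d nu_{j-1} * (1 + f)
        edges.append(edges[-1] + dx)
    return np.array(edges)
```

Successive edge spacings grow by exactly the factor 1 + f.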

stingray.utils.
look_for_array_in_array
(array1, array2)[source]¶ Find a subset of values in an array.
Parameters:  array1 (iterable) – An array with values to be searched
 array2 (iterable) – A second array which potentially contains a subset of values
also contained in
array1
Returns: array3 – An array with the values contained in both array1 and array2

stingray.utils.
is_string
(s)[source]¶ Portable function to answer whether a variable is a string.
Parameters: s (object) – An object that is potentially a string Returns: isstring – A boolean decision on whether s
is a string or notReturn type: bool

stingray.utils.
is_iterable
(var)[source]¶ Test if a variable is an iterable.
Parameters: var (object) – The variable to be tested for iterability Returns: is_iter – Returns True
ifvar
is anIterable
,False
otherwiseReturn type: bool

stingray.utils.
order_list_of_arrays
(data, order)[source]¶ Sort an array according to the specified order.
Parameters:  data (list or dict of iterables) – The data to be reordered
 order (array of int) – The order in which to sort the arrays
Returns: data – The reordered data
Return type: list or dict

stingray.utils.
optimal_bin_time
(fftlen, tbin)[source]¶ Vary slightly the bin time to have a power of two number of bins.
Given an FFT length and a proposed bin time, return a bin time slightly shorter than the original that will produce a power-of-two number of FFT bins.
Parameters:  fftlen (int) – Length of the FFT segments
 tbin (float) – The proposed bin time
Returns: res – A time resolution that will produce a Fourier spectrum with fftlen frequencies and a number of FFT bins that are a power of two
Return type: float
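The idea can be sketched as: compute the implied number of bins, round it up to the next power of two, and derive the resulting (slightly shorter) bin time. The helper below is hypothetical, illustrating the approach rather than reproducing stingray's exact rounding:

```python
import numpy as np

def optimal_bin_time_sketch(fftlen, tbin):
    """Shrink tbin so fftlen seconds contain a power-of-two number of bins."""
    nbin = fftlen / tbin
    # Round the number of bins up to the next power of two
    nbin_pow2 = 2 ** int(np.ceil(np.log2(nbin)))
    return fftlen / nbin_pow2
```

For example, `optimal_bin_time_sketch(512, 1.1)` returns 1.0, since 512 seconds at 1.0 s per bin give 512 = 2**9 bins.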

stingray.utils.
contiguous_regions
(condition)[source]¶ Find contiguous
True
regions of the boolean arraycondition
.Return a 2D array where the first column is the start index of the region and the second column is the end index, found on [socontiguous].
Parameters: condition (bool array) – Returns: idx – A list of integer couples, with the start and end of each block of True values in the original array
Return type: [[i0_0, i0_1], [i1_0, i1_1], ...]
Notes
[socontiguous] http://stackoverflow.com/questions/4494404/find-large-number-of-consecutive-values-fulfilling-condition-in-a-numpy-array
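The start/stop pairs can be found with np.diff on a padded boolean array, in the spirit of the StackOverflow answer cited above. The `contiguous_true_blocks` helper is hypothetical; note the end index is exclusive:

```python
import numpy as np

def contiguous_true_blocks(condition):
    """Start/end index pairs (end exclusive) of each run of True values."""
    condition = np.asarray(condition, dtype=bool)
    # Pad with False so runs at the array borders are detected too
    padded = np.concatenate(([False], condition, [False])).astype(int)
    d = np.diff(padded)
    starts = np.flatnonzero(d == 1)
    ends = np.flatnonzero(d == -1)
    return np.column_stack((starts, ends))
```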

stingray.utils.
get_random_state
(random_state=None)[source]¶ Return a Mersenne Twister pseudorandom number generator.
Parameters: random_state (integer or numpy.random.RandomState
, optional, default None
) – Returns: random_state Return type: mtrand.RandomState object

stingray.utils.
baseline_als
(x, y, lam=None, p=None, niter=10, return_baseline=False, offset_correction=False)[source]¶ Baseline Correction with Asymmetric Least Squares Smoothing.
Parameters:  x (arraylike) – the sample time/number/position
 y (arraylike) – the data series corresponding to
x
 lam (float) – the lambda parameter of the ALS method. This controls how much the baseline can adapt to local changes. A higher value corresponds to a stiffer baseline
 p (float) – the asymmetry parameter of the ALS method. This controls the overall slope tolerated for the baseline. A higher value corresponds to a higher possible slope
Other Parameters:  niter (int) – The number of iterations to perform
 return_baseline (bool) – return the baseline?
 offset_correction (bool) – also correct for an offset to align with the running mean of the scan
Returns:  y_subtracted (arraylike, same size as
y
) – The initial time series with the baseline trend subtracted  baseline (arraylike, same size as
y
) – Fitted baseline. Only returned if return_baseline isTrue
Examples
>>> x = np.arange(0, 10, 0.01)
>>> y = np.zeros_like(x) + 10
>>> ysub = baseline_als(x, y)
>>> np.all(ysub < 0.001)
True

stingray.utils.
excess_variance
(lc, normalization='fvar')[source]¶ Calculate the excess variance.
Vaughan et al. 2003, MNRAS 345, 1271 give three measurements of source intrinsic variance: if a light curve has a total variance of \(S^2\), and each point has an error bar \(\sigma_{err}\), the excess variance is defined as
\[\sigma_{XS} = S^2 - \overline{\sigma_{err}}^2;\]the normalized excess variance is the excess variance divided by the square of the mean intensity:
\[\sigma_{NXS} = \dfrac{\sigma_{XS}}{\overline{x}^2};\]the fractional mean square variability amplitude, or \(F_{var}\), is finally defined as
\[F_{var} = \sqrt{\dfrac{\sigma_{XS}}{\overline{x}^2}}\]Parameters:  lc (a
Lightcurve
object) –  normalization (str) – if
fvar
, return the fractional mean square variability \(F_{var}\). If none
, return the unnormalized excess variance \(\sigma_{XS}\). If norm_xs
, return the normalized excess variance \(\sigma_{NXS}\)
Returns:  var_xs (float)
 var_xs_err (float)
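The three normalizations can be sketched directly from the formulae above. The `excess_variance_sketch` helper is hypothetical: it operates on plain arrays rather than a Lightcurve, interprets the error term as the mean of the squared errors, and computes no uncertainty:

```python
import numpy as np

def excess_variance_sketch(counts, counts_err, normalization="fvar"):
    """Excess variance of a light curve, following Vaughan et al. 2003."""
    mean = np.mean(counts)
    # Sample variance minus the mean square measurement error
    var_xs = np.var(counts, ddof=1) - np.mean(np.asarray(counts_err) ** 2)
    if normalization == "none":
        return var_xs
    if normalization == "norm_xs":
        return var_xs / mean ** 2
    # Fractional rms variability amplitude F_var
    return np.sqrt(var_xs / mean ** 2)
```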

stingray.utils.
create_window
(N, window_type='uniform')[source]¶ A method to create window functions commonly used in signal processing.
Windows supported are: uniform (rectangular window), Parzen, Hamming, Hanning, triangular, Welch, Blackman and flat top.
Parameters:  N (int) – Total number of data points in window. If negative, abs is taken.
 window_type ({
uniform
,parzen
,hamming
,hanning
,triangular
,welch
,blackmann
,flattop
}, optional, defaultuniform
) – Type of window to create.
Returns: window – Window function of length
N
.Return type: numpy.ndarray

stingray.utils.
poisson_symmetrical_errors
(counts)[source]¶ Optimized version of frequentist symmetrical errors.
Uses a lookup table in order to limit the calls to poisson_conf_interval
Parameters: counts (iterable) – An array of Poissondistributed numbers Returns: err – An array of uncertainties associated with the Poisson counts in counts
Return type: numpy.ndarray Examples
>>> from astropy.stats import poisson_conf_interval
>>> counts = np.random.randint(0, 1000, 100)
>>> # ---- Do it without the lookup table ----
>>> err_low, err_high = poisson_conf_interval(np.asarray(counts),
...     interval='frequentist-confidence', sigma=1)
>>> err_low -= np.asarray(counts)
>>> err_high -= np.asarray(counts)
>>> err = (np.absolute(err_low) + np.absolute(err_high))/2.0
>>> # ---- Do it with this function ----
>>> err_thisfun = poisson_symmetrical_errors(counts)
>>> # Test that results are always the same
>>> assert np.all(err_thisfun == err)

stingray.utils.
standard_error
(xs, mean)[source]¶ Return the standard error of the mean (SEM) of an array of arrays.
Parameters:  xs (2d float array) – List of data point arrays.
 mean (1d float array) – Average of the data points.
Returns: standard_error – Standard error of the mean (SEM).
Return type: 1d float array

stingray.utils.
nearest_power_of_two
(x)[source]¶ Return a number which is nearest to x and is the integral power of two.
Parameters: x (int, float) – Returns: x_nearest – Number closest to x and is the integral power of two. Return type: int

stingray.utils.
find_nearest
(array, value)[source]¶ Return the array value that is closest to the input value (Abigail Stevens: Thanks StackOverflow!)
Parameters:  array (iterable) – The array to be searched
 value (int, float) – The value to match
Returns:  array[idx] (int or float) – The array value that is closest to the input value.
 idx (int) – The index of the array of the closest value.
Modeling¶
This subpackage defines classes and functions related to parametric modelling of various types of data sets. Currently, most functionality is focused on modelling Fourier products (especially power spectra and averaged power spectra), but rudimentary functionality exists for modelling e.g. light curves.
LogLikelihood Classes¶
These classes define basic log-likelihoods for modelling time series and power spectra.
stingray.modeling.LogLikelihood
is an abstract base class, i.e. a template for creating
user-defined log-likelihoods and should not be instantiated itself. Based on this base class
are several definitions for a stingray.modeling.GaussianLogLikelihood
, appropriate for
data with normally distributed uncertainties, a stingray.modeling.PoissonLogLikelihood
appropriate for photon counting data, and a stingray.modeling.PSDLogLikelihood
appropriate for (averaged) power spectra.

class
stingray.modeling.
LogLikelihood
(x, y, model, **kwargs)[source]¶ Abstract Base Class defining the structure of a
LogLikelihood
object. This class cannot be called itself, since each statistical distribution has its own definition for the likelihood, which should occur in subclasses.Parameters:  x (iterable) – xcoordinate of the data. Could be multidimensional.
 y (iterable) – ycoordinate of the data. Could be multidimensional.
 model (an
astropy.modeling.FittableModel
instance) – Your model  kwargs – keyword arguments specific to the individual subclasses. For details, see the respective docstrings for each subclass

class
stingray.modeling.
GaussianLogLikelihood
(x, y, yerr, model)[source]¶ Likelihood for data with Gaussian uncertainties. Astronomers also call this likelihood Chi-Squared, but be aware that this has nothing to do with the likelihood based on the Chi-square distribution, which is defined in
PSDLogLikelihood
in this module!
Use this class whenever your data has Gaussian uncertainties.
Parameters:  x (iterable) – xcoordinate of the data
 y (iterable) – y-coordinate of the data
 yerr (iterable) – the uncertainty on the data, as standard deviation
 model (an
astropy.modeling.FittableModel
instance) – The model to use in the likelihood.

x
¶ xcoordinate of the data
Type: iterable

y
¶ y-coordinate of the data
Type: iterable

yerr
¶ the uncertainty on the data, as standard deviation
Type: iterable

model
¶ The model to use in the likelihood.
Type: an Astropy Model instance

evaluate
(pars, neg=False)[source]¶ Evaluate the Gaussian loglikelihood for a given set of parameters.
Parameters:  pars (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the loglikelihood. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – If True
, return the negative log-likelihood, i.e. -loglike
, rather than loglike
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: loglike – The log(likelihood) value for the data and model.
Return type: float
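For model predictions mu, the Gaussian log-likelihood that evaluate computes reduces to the familiar sum over data points. The `gaussian_loglike` helper below is a hypothetical sketch; the class itself first evaluates an astropy model at the given parameters to obtain mu:

```python
import numpy as np

def gaussian_loglike(y, mu, yerr, neg=False):
    """Gaussian log-likelihood of data y given model values mu and errors yerr."""
    loglike = np.sum(-0.5 * np.log(2.0 * np.pi * yerr ** 2)
                     - 0.5 * ((y - mu) / yerr) ** 2)
    # neg=True returns -loglike, handy for minimizers
    return -loglike if neg else loglike
```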

class
stingray.modeling.
PoissonLogLikelihood
(x, y, model)[source]¶ Likelihood for data with uncertainties following a Poisson distribution. This is useful e.g. for (binned) photon count data.
Parameters:  x (iterable) – xcoordinate of the data
 y (iterable) – y-coordinate of the data
 model (an
astropy.modeling.FittableModel
instance) – The model to use in the likelihood.

x
¶ xcoordinate of the data
Type: iterable

y
¶ y-coordinate of the data
Type: iterable

yerr
¶ the uncertainty on the data, as standard deviation
Type: iterable

model
¶ The model to use in the likelihood.
Type: an astropy.modeling.FittableModel
instance

evaluate
(pars, neg=False)[source]¶ Evaluate the loglikelihood for a given set of parameters.
Parameters:  pars (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the loglikelihood. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – If True
, return the negative log-likelihood, i.e. -loglike
, rather than loglike
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: loglike – The log(likelihood) value for the data and model.
Return type: float

class
stingray.modeling.
PSDLogLikelihood
(freq, power, model, m=1)[source]¶ A likelihood based on the Chi-square distribution, appropriate for modelling (averaged) power spectra. Note that this is not the same as the statistic astronomers commonly call Chi-Square, which is a fit statistic derived from the Gaussian log-likelihood, defined elsewhere in this module.
Parameters:  freq (iterable) – Array with frequencies
 power (iterable) – Array with (averaged/singular) powers corresponding to the
frequencies in
freq
 model (an
astropy.modeling.FittableModel
instance) – The model to use in the likelihood.  m (int) – 1/2 of the degrees of freedom

x
¶ xcoordinate of the data
Type: iterable

y
¶ y-coordinate of the data
Type: iterable

yerr
¶ the uncertainty on the data, as standard deviation
Type: iterable

model
¶ The model to use in the likelihood.
Type: an astropy.modeling.FittableModel
instance

evaluate
(pars, neg=False)[source]¶ Evaluate the loglikelihood for a given set of parameters.
Parameters:  pars (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the loglikelihood. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – If True
, return the negative log-likelihood, i.e. -loglike
, rather than loglike
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: loglike – The log(likelihood) value for the data and model.
Return type: float

class
stingray.modeling.
LaplaceLogLikelihood
(x, y, yerr, model)[source]¶ A Laplace likelihood for the cospectrum.
Parameters:  x (iterable) – Array with independent variable
 y (iterable) – Array with dependent variable
 model (an
astropy.modeling.FittableModel
instance) – The model to use in the likelihood.  yerr (iterable) – Array with the uncertainties on
y
, in standard deviation

x
¶ xcoordinate of the data
Type: iterable

y
¶ y-coordinate of the data
Type: iterable

yerr
¶ the uncertainty on the data, as standard deviation
Type: iterable

model
¶ The model to use in the likelihood.
Type: an astropy.modeling.FittableModel
instance

evaluate
(pars, neg=False)[source]¶ Evaluate the loglikelihood for a given set of parameters.
Parameters:  pars (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the loglikelihood. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – If True
, return the negative log-likelihood, i.e. -loglike
, rather than loglike
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: loglike – The log(likelihood) value for the data and model.
Return type: float
Posterior Classes¶
These classes define basic posteriors for parametric modelling of time series and power spectra, based on
the loglikelihood classes defined in LogLikelihood Classes. stingray.modeling.Posterior
is an
abstract base class laying out a basic template for defining posteriors. As with the loglikelihood classes
above, several posterior classes are defined for a variety of data types.
Note that priors are not predefined in these classes, since they are problem dependent and should be
set by the user. The convenience function stingray.modeling.set_logprior()
can be useful to help set
priors for these posterior classes.

class
stingray.modeling.
Posterior
(x, y, model, **kwargs)[source]¶ Define a
Posterior
object.The
Posterior
describes the Bayesian probability distribution of a set of parameters \(\theta\) given some observed data \(D\) and some prior assumptions \(I\).It is defined as
\[p(\theta | D, I) = p(D | \theta, I) p(\theta | I)/p(D | I)\]where \(p(D | \theta, I)\) describes the likelihood, i.e. the sampling distribution of the data and the (parametric) model, and \(p(\theta | I)\) describes the prior distribution, i.e. our information about the parameters \(\theta\) before we gathered the data. The marginal likelihood \(p(D | I)\) describes the probability of observing the data given the model assumptions, integrated over the space of all parameters.
Parameters:  x (iterable) – The abscissa or independent variable of the data. This could in principle be a multidimensional array.
 y (iterable) – The ordinate or dependent variable of the data.
 model (
astropy.modeling.models
instance) – The parametric model supposed to represent the data. For details see theastropy.modeling
documentation  kwargs – keyword arguments related to the subclasses of
Posterior
. For details, see the documentation of the individual subclasses
References
 Sivia, D. S., and J. Skilling. “Data Analysis: A Bayesian Tutorial.” 2006.
 Gelman, Andrew, et al. Bayesian data analysis. Vol. 2. Boca Raton, FL, USA: Chapman & Hall/CRC, 2014.
 von Toussaint, Udo. “Bayesian inference in physics.” Reviews of Modern Physics 83.3 (2011): 943.
 Hogg, David W. “Probability Calculus for inference”. arXiv:1205.4446

logposterior
(t0, neg=False)[source]¶ Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
.Parameters:  t0 (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the logposterior. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – If True
, return the negative log-posterior, i.e. -lpost
, rather than lpost
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: lpost – The value of the logposterior for the given parameters
t0
Return type: float
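Up to the constant marginal likelihood, the log-posterior is simply the log-likelihood plus the log-prior. The `logposterior_sketch` helper below is a hypothetical sketch of how the two user-supplied callables combine; the class methods work on an astropy model instead of plain callables:

```python
import numpy as np

def logposterior_sketch(t0, loglikelihood, logprior, neg=False):
    """Combine user-supplied loglikelihood and logprior callables."""
    logp = logprior(t0)
    if not np.isfinite(logp):
        lpost = -np.inf            # parameters excluded by the prior
    else:
        lpost = loglikelihood(t0) + logp
    # neg=True returns -lpost for use with minimizers
    return -lpost if neg else lpost
```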

class
stingray.modeling.
GaussianPosterior
(x, y, yerr, model, priors=None)[source]¶ A general class for twodimensional data following a Gaussian sampling distribution.
Parameters:  x (numpy.ndarray) – independent variable
 y (numpy.ndarray) – dependent variable
 yerr (numpy.ndarray) – measurement uncertainties for y
 model (instance of any subclass of
astropy.modeling.FittableModel
) – The model for the power spectrum.  priors (dict of form
{"parameter name": function}
, optional) – A dictionary with the definitions for the prior probabilities. For each parameter inmodel
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function definition defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf)
. If priors = None
, then no prior is set. This means priors need to be added by hand using theset_logprior()
function defined in this module. Note that it is impossible to call aPosterior
object itself or theself.logposterior
method without defining a prior.

logposterior
(t0, neg=False)¶ Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
.Parameters:  t0 (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the logposterior. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – If True
, return the negative log-posterior, i.e. -lpost
, rather than lpost
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: lpost – The value of the logposterior for the given parameters
t0
Return type: float

class
stingray.modeling.
PoissonPosterior
(x, y, model, priors=None)[source]¶ Posterior
for Poisson light curve data. Primary intended use is for modelling Xray light curves, but alternative uses are conceivable.Parameters:  x (numpy.ndarray) – The independent variable (e.g. time stamps of a light curve)
 y (numpy.ndarray) – The dependent variable (e.g. counts per bin of a light curve)
 model (instance of any subclass of
astropy.modeling.FittableModel
) – The model for the power spectrum.  priors (dict of form
{"parameter name": function}
, optional) – A dictionary with the definitions for the prior probabilities. For each parameter inmodel
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function definition defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf)
. If priors = None
, then no prior is set. This means priors need to be added by hand using theset_logprior()
function defined in this module. Note that it is impossible to call aPosterior
object itself or theself.logposterior
method without defining a prior.

x
¶ The independent variable (e.g. time stamps of a light curve)
Type: numpy.ndarray

y
¶ The dependent variable (e.g. counts per bin of a light curve)
Type: numpy.ndarray

model
¶ The model for the power spectrum.
Type: instance of any subclass of astropy.modeling.FittableModel

logposterior
(t0, neg=False)¶ Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
.Parameters:  t0 (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the logposterior. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – If True
, return the negative log-posterior, i.e. -lpost
, rather than lpost
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: lpost – The value of the logposterior for the given parameters
t0
Return type: float

class
stingray.modeling.
PSDPosterior
(freq, power, model, priors=None, m=1)[source]¶ Posterior
distribution for power spectra. Uses an exponential distribution for the errors in the likelihood, or a \(\chi^2\) distribution with \(2M\) degrees of freedom, where \(M\) is the number of frequency bins or power spectra averaged in each bin.Parameters:  ps ({
stingray.Powerspectrum
stingray.AveragedPowerspectrum
} instance) – thestingray.Powerspectrum
object containing the data  model (instance of any subclass of
astropy.modeling.FittableModel
) – The model for the power spectrum.  priors (dict of form
{"parameter name": function}
, optional) – A dictionary with the definitions for the prior probabilities. For each parameter inmodel
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf).
If priors = None
, then no prior is set. This means priors need to be added by hand using theset_logprior()
function defined in this module. Note that it is impossible to call aPosterior
object itself or theself.logposterior
method without defining a prior.  m (int, default
1
) – The number of averaged periodograms or frequency bins inps
. Useful for binned/averaged periodograms, since the value of m will change the likelihood function!
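As a sketch of the underlying statistic (not the library's exact code), the log-likelihood for a single, unaveraged power spectrum (m=1) with exponentially distributed noise powers can be written as:

```python
import numpy as np

def whittle_loglike(power, model_power):
    # Log-likelihood for exponentially distributed periodogram powers
    # (the m = 1 case); illustrative sketch only.
    return -np.sum(np.log(model_power) + power / model_power)

rng = np.random.default_rng(0)
model_power = np.full(100, 2.0)               # flat (white-noise) model
power = rng.exponential(scale=2.0, size=100)  # fake noise powers
ll = whittle_loglike(power, model_power)
```

For m > 1 averaged bins, the exponential distribution is replaced by a chi-square distribution with 2M degrees of freedom, as described above.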

ps
¶ the
stingray.Powerspectrum
object containing the dataType: { stingray.Powerspectrum
stingray.AveragedPowerspectrum
} instance

x
¶ The independent variable (list of frequencies) stored in
ps.freq
Type: numpy.ndarray

y
¶ The dependent variable (list of powers) stored in
ps.power
Type: numpy.ndarray

model
¶ The model for the power spectrum.
Type: instance of any subclass of astropy.modeling.FittableModel

logposterior
(t0, neg=False)¶ Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
.Parameters:  t0 (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the logposterior. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – IfTrue
, return the negative log-posterior, i.e. -lpost
, rather than lpost
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: lpost – The value of the logposterior for the given parameters
t0
Return type: float

class
stingray.modeling.
LaplacePosterior
(x, y, yerr, model, priors=None)[source]¶ A general class for two-dimensional data following a Laplace (double-exponential) sampling distribution.
Parameters:  x (numpy.ndarray) – independent variable
 y (numpy.ndarray) – dependent variable
 yerr (numpy.ndarray) – measurement uncertainties for y, in standard deviation
 model (instance of any subclass of
astropy.modeling.FittableModel
) – The model for the power spectrum.  priors (dict of form
{"parameter name": function}
, optional) – A dictionary with the definitions for the prior probabilities. For each parameter inmodel
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function defining the prior (e.g. a lambda function or a scipy.stats.distribution.pdf).
If priors = None
, then no prior is set. This means priors need to be added by hand using theset_logprior()
function defined in this module. Note that it is impossible to call aPosterior
object itself or theself.logposterior
method without defining a prior.

logposterior
(t0, neg=False)¶ Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
.Parameters:  t0 (numpy.ndarray) – An array of parameters at which to evaluate the model
and subsequently the logposterior. Note that the
length of this array must match the free parameters in
model
, i.e.npar
 neg (bool, optional, default
False
) – IfTrue
, return the negative log-posterior, i.e. -lpost
, rather than lpost
. This is useful e.g. for optimization routines, which generally minimize functions.
Returns: lpost – The value of the logposterior for the given parameters
t0
Return type: float
Parameter Estimation Classes¶
These classes implement functionality related to parameter estimation. They define basic fit
and
sample
methods using scipy.optimize
and emcee
, respectively, for optimization and Markov Chain Monte
Carlo sampling. stingray.modeling.PSDParEst
implements some more advanced functionality for modelling
power spectra, including both frequentist and Bayesian searches for (quasi)periodic signals.

class
stingray.modeling.
ParameterEstimation
(fitmethod='BFGS', max_post=True)[source]¶ Parameter estimation of two-dimensional data, either via optimization or MCMC. Note: optimization with bounds is not supported. If something like this is required, define (uniform) priors in the ParametricModel instances to be used below.
Parameters:  fitmethod (string, optional, default
BFGS
) – Any of the strings allowed inscipy.optimize.minimize
in the method keyword. Sets the fit method to be used.  max_post (bool, optional, default
True
) – IfTrue
, then compute the Maximum-A-Posteriori estimate. If False
, compute a Maximum Likelihood estimate.

calibrate_lrt
(lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)[source]¶ Calibrate the outcome of a Likelihood Ratio Test via MCMC.
In order to compare models via likelihood ratio test, one generally aims to compute a p-value for the null hypothesis (generally the simpler model). There are two special cases where the theoretical distribution (a chi-square distribution) used to compute that p-value analytically from the observed likelihood ratio is not applicable:
 the models are not nested (i.e. Model 1 is not a special, simpler case of Model 2),
 the parameter values fixed in Model 2 to retrieve Model 1 are at the edges of parameter space (e.g. if one must set, say, an amplitude to zero in order to remove a component in the more complex model, and negative amplitudes are excluded a priori)
In these cases, the observed likelihood ratio must be calibrated via simulations of the simpler model (Model 1), using MCMC to take into account the uncertainty in the parameters. This function does exactly that: it computes the likelihood ratio for the observed data, and produces simulations to calibrate the likelihood ratio and compute a p-value for observing the data under the assumption that Model 1 is true.
If
max_post=True
, the code will use MCMC to sample the posterior of the parameters and simulate fake data from there.If
max_post=False
, the code will use the covariance matrix derived from the fit to simulate data sets for comparison.Parameters:  lpost1 (object of a subclass of
Posterior
) – ThePosterior
object for model 1  t1 (iterable) – The starting parameters for model 1
 lpost2 (object of a subclass of
Posterior
) – ThePosterior
object for model 2  t2 (iterable) – The starting parameters for model 2
 neg (bool, optional, default
True
) – Boolean flag to decide whether to use the negative loglikelihood or logposterior  max_post (bool, optional, default
False
) – IfTrue
, set the internal state to do the optimization with the loglikelihood rather than the logposterior.
Returns: pvalue – The p-value of the observed likelihood ratio under the null hypothesis (Model 1), calibrated via the simulations
Return type: float [0,1]

compute_lrt
(lpost1, t1, lpost2, t2, neg=True, max_post=False)[source]¶ This function computes the Likelihood Ratio Test between two nested models.
Parameters:  lpost1 (object of a subclass of
Posterior
) – ThePosterior
object for model 1  t1 (iterable) – The starting parameters for model 1
 lpost2 (object of a subclass of
Posterior
) – ThePosterior
object for model 2  t2 (iterable) – The starting parameters for model 2
 neg (bool, optional, default
True
) – Boolean flag to decide whether to use the negative loglikelihood or logposterior  max_post (bool, optional, default
False
) – IfTrue
, set the internal state to do the optimization with the loglikelihood rather than the logposterior.
Returns:  lrt (float) – The likelihood ratio for model 2 and model 1
 res1 (OptimizationResults object) – Contains the result of fitting
lpost1
 res2 (OptimizationResults object) – Contains the results of fitting
lpost2
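For reference, the likelihood ratio statistic itself is simple to compute; the following scipy sketch (with hypothetical log-likelihood values) shows the nested-model case in which the asymptotic chi-square calibration is valid:

```python
import scipy.stats

# Hypothetical maximised log-likelihoods for two nested models
loglike_model1 = -120.4  # simpler model, 2 free parameters
loglike_model2 = -115.1  # more complex model, 4 free parameters

lrt = 2.0 * (loglike_model2 - loglike_model1)

# Asymptotic p-value; valid only for nested models away from
# parameter-space boundaries, otherwise the ratio must be calibrated
# via simulations as described above
dof = 4 - 2
pvalue = scipy.stats.chi2(df=dof).sf(lrt)
```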

fit
(lpost, t0, neg=True, scipy_optimize_options=None)[source]¶ Do either a Maximum-A-Posteriori (MAP) or Maximum Likelihood (ML) fit to the data.
MAP fits include priors, ML fits do not.
Parameters:  lpost (
Posterior
(or subclass) instance) – an instance of class Posterior
or one of its subclasses that defines the function to be minimized (either inloglikelihood
orlogposterior
)  t0 ({
list
numpy.ndarray
}) – List/array with set of initial parameters  neg (bool, optional, default
True
) – Boolean to be passed tolpost
, setting whether to use the negative posterior or the negative loglikelihood. Useful for optimization routines, which are generally defined as minimization routines.  scipy_optimize_options (dict, optional, default
None
) – A dictionary with options forscipy.optimize.minimize
, directly passed on as keyword arguments.
Returns: res – An object containing useful summaries of the fitting procedure. For details, see documentation of class:OptimizationResults.
Return type: OptimizationResults
object
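The neg flag exists because scipy's optimizers minimize; maximizing a (log-)likelihood therefore means minimizing its negative. A self-contained sketch of that pattern (a toy Gaussian-mean fit, not stingray code):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
data = rng.normal(loc=3.0, scale=1.0, size=500)

def neg_loglike(t0):
    # negative Gaussian log-likelihood for the mean (unit variance),
    # dropping parameter-independent constants
    return 0.5 * np.sum((data - t0[0]) ** 2)

res = minimize(neg_loglike, x0=[0.0], method="BFGS")
# res.x[0] converges to the sample mean
```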

sample
(lpost, t0, cov=None, nwalkers=500, niter=100, burnin=100, threads=1, print_results=True, plot=False, namestr='test')[source]¶ Sample the
Posterior
distribution defined inlpost
using MCMC. Here we use theemcee
package, but other implementations could in principle be used.Parameters:  lpost (instance of a
Posterior
subclass) – an instance of class Posterior
or one of its subclasses that defines the function to be minimized (either inloglikelihood
orlogposterior
)  t0 (iterable) – list or array containing the starting parameters. Its length
must match
lpost.model.npar
.  nwalkers (int, optional, default 500) – The number of walkers (chains) to use during the MCMC procedure. The more walkers are used, the slower the estimation will be, but the better the final distribution is likely to be.
 niter (int, optional, default 100) – The number of iterations to run the MCMC chains for. The larger this number, the longer the estimation will take, but the higher the chance that the walkers have actually converged on the true posterior distribution.
 burnin (int, optional, default 100) – The number of iterations to run the walkers before convergence is assumed to have occurred. This part of the chain will be discarded before sampling from what is then assumed to be the posterior distribution desired.
 threads (int, optional, default 1) – The number of threads for parallelization.
Default is
1
, i.e. no parallelization  print_results (bool, optional, default
True
) – Boolean flag setting whether the results of the MCMC run should be printed to standard output. Default: True  plot (bool, optional, default
False
) – Boolean flag setting whether summary plots of the MCMC chains should be produced. Default: False  namestr (str, optional, default
test
) – Optional string for output file names for the plotting.
Returns: res – An object of class
SamplingResults
summarizing the results of the MCMC run.Return type: class:SamplingResults object
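Conceptually, each walker performs accept/reject updates like the bare-bones random-walk Metropolis sketch below; emcee's affine-invariant moves are more sophisticated, but the idea is the same (illustrative only, not the library's sampler):

```python
import numpy as np

def metropolis(logpost, t0, nsteps=2000, step=0.5, seed=1):
    # Minimal random-walk Metropolis sampler; illustrative sketch only.
    rng = np.random.default_rng(seed)
    chain = [np.asarray(t0, dtype=float)]
    lp = logpost(chain[-1])
    for _ in range(nsteps):
        proposal = chain[-1] + rng.normal(0, step, size=len(chain[-1]))
        lp_new = logpost(proposal)
        # accept with probability min(1, exp(lp_new - lp))
        if np.log(rng.uniform()) < lp_new - lp:
            chain.append(proposal)
            lp = lp_new
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

# standard-normal log-posterior as a toy target
samples = metropolis(lambda t: -0.5 * np.sum(t ** 2), t0=[0.0])
```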

class
stingray.modeling.
PSDParEst
(ps, fitmethod='BFGS', max_post=True)[source]¶ Parameter estimation for parametric modelling of power spectra.
This class contains functionality that allows parameter estimation and related tasks that involve fitting a parametric model to an (averaged) power spectrum.
Parameters:  ps (class:stingray.Powerspectrum or class:stingray.AveragedPowerspectrum object) – The power spectrum to be modelled
 fitmethod (str, optional, default
BFGS
) – A string allowed byscipy.optimize.minimize
as a valid fitting method  max_post (bool, optional, default
True
) – IfTrue
, do a Maximum-A-Posteriori (MAP) fit, i.e. fit with priors, otherwise do a Maximum Likelihood fit instead

calibrate_highest_outlier
(lpost, t0, sample=None, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)[source]¶ Calibrate the highest outlier in a data set using MCMCsimulated power spectra.
In short, the procedure does a MAP fit to the data, computes the statistic
\[\max{(T_R = 2(\mathrm{data}/\mathrm{model}))}\]and then does an MCMC run using the data and the model, or generates parameter samples from the likelihood distribution using the derived covariance in a Maximum Likelihood fit. From the (posterior) samples, it generates fake power spectra. Each fake spectrum is fit in the same way as the data, and the highest data/model outlier extracted as for the data. The observed value of \(T_R\) can then be directly compared to the simulated distribution of \(T_R\) values in order to derive a pvalue of the null hypothesis that the observed \(T_R\) is compatible with being generated by noise.
Parameters:  lpost (
stingray.modeling.PSDPosterior
object) – An instance of classstingray.modeling.PSDPosterior
that defines the function to be minimized (either inloglikelihood
orlogposterior
)  t0 ({list  numpy.ndarray}) – List/array with set of initial parameters
 sample (
SamplingResults
instance, optional, defaultNone
) – If a sampler has already been run, theSamplingResults
instance can be fed into this method here, otherwise this method will run a sampler automatically  max_post (bool, optional, default
False
) – IfTrue
, do MAP fits on the power spectrum to find the highest data/model outlier. Otherwise, do a Maximum Likelihood fit. If True
, the simulated power spectra will be generated from an MCMC run, otherwise the method will employ the approximated covariance matrix for the parameters derived from the likelihood surface to generate samples from that likelihood function.  nsim (int, optional, default
1000
) – Number of fake power spectra to simulate from the posterior sample. Note that this number sets the resolution of the resulting pvalue. Fornsim=1000
, the highest resolution that can be achieved is \(10^{-3}\).  niter (int, optional, default 200) – If
sample
isNone
, this variable will be used to set the number of steps in the MCMC procedure after burnin.  nwalkers (int, optional, default 500) – If
sample
isNone
, this variable will be used to set the number of MCMC chains run in parallel in the sampler.  burnin (int, optional, default 200) – If
sample
isNone
, this variable will be used to set the number of burnin steps to be discarded in the initial phase of the MCMC run  namestr (str, optional, default
test
) – A string to be used for storing MCMC output and plots to disk  seed (int, optional, default
None
) – An optional number to seed the random number generator with, for reproducibility of the results obtained with this method.
Returns: pval – The pvalue that the highest data/model outlier is produced by random noise, calibrated using simulated power spectra from an MCMC run.
Return type: float
References
For more details on the procedure employed here, see
 Vaughan, 2010: https://arxiv.org/abs/0910.2706
 Huppenkothen et al, 2013: https://arxiv.org/abs/1212.1011
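The outlier statistic itself is straightforward to compute once a best-fit model spectrum is in hand (numpy sketch with simulated noise powers; the model spectrum here is hypothetical):

```python
import numpy as np

def highest_outlier(power, model_power):
    # T_R = max(2 * data / model), the statistic calibrated above
    return np.max(2.0 * power / model_power)

rng = np.random.default_rng(3)
model_power = np.full(200, 1.5)               # hypothetical best-fit model
power = rng.exponential(scale=1.5, size=200)  # simulated noise powers
tr_obs = highest_outlier(power, model_power)
```

The p-value then follows from the fraction of simulated spectra whose T_R exceeds the observed value.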

calibrate_lrt
(lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)¶ Calibrate the outcome of a Likelihood Ratio Test via MCMC.
In order to compare models via likelihood ratio test, one generally aims to compute a p-value for the null hypothesis (generally the simpler model). There are two special cases where the theoretical distribution (a chi-square distribution) used to compute that p-value analytically from the observed likelihood ratio is not applicable:
 the models are not nested (i.e. Model 1 is not a special, simpler case of Model 2),
 the parameter values fixed in Model 2 to retrieve Model 1 are at the edges of parameter space (e.g. if one must set, say, an amplitude to zero in order to remove a component in the more complex model, and negative amplitudes are excluded a priori)
In these cases, the observed likelihood ratio must be calibrated via simulations of the simpler model (Model 1), using MCMC to take into account the uncertainty in the parameters. This function does exactly that: it computes the likelihood ratio for the observed data, and produces simulations to calibrate the likelihood ratio and compute a p-value for observing the data under the assumption that Model 1 is true.
If
max_post=True
, the code will use MCMC to sample the posterior of the parameters and simulate fake data from there.If
max_post=False
, the code will use the covariance matrix derived from the fit to simulate data sets for comparison.Parameters:  lpost1 (object of a subclass of
Posterior
) – ThePosterior
object for model 1  t1 (iterable) – The starting parameters for model 1
 lpost2 (object of a subclass of
Posterior
) – ThePosterior
object for model 2  t2 (iterable) – The starting parameters for model 2
 neg (bool, optional, default
True
) – Boolean flag to decide whether to use the negative loglikelihood or logposterior  max_post (bool, optional, default
False
) – IfTrue
, set the internal state to do the optimization with the loglikelihood rather than the logposterior.
Returns: pvalue – The p-value of the observed likelihood ratio under the null hypothesis (Model 1), calibrated via the simulations
Return type: float [0,1]

compute_lrt
(lpost1, t1, lpost2, t2, neg=True, max_post=False)¶ This function computes the Likelihood Ratio Test between two nested models.
Parameters:  lpost1 (object of a subclass of
Posterior
) – ThePosterior
object for model 1  t1 (iterable) – The starting parameters for model 1
 lpost2 (object of a subclass of
Posterior
) – ThePosterior
object for model 2  t2 (iterable) – The starting parameters for model 2
 neg (bool, optional, default
True
) – Boolean flag to decide whether to use the negative loglikelihood or logposterior  max_post (bool, optional, default
False
) – IfTrue
, set the internal state to do the optimization with the loglikelihood rather than the logposterior.
Returns:  lrt (float) – The likelihood ratio for model 2 and model 1
 res1 (OptimizationResults object) – Contains the result of fitting
lpost1
 res2 (OptimizationResults object) – Contains the results of fitting
lpost2

fit
(lpost, t0, neg=True, scipy_optimize_options=None)[source]¶ Do either a Maximum-A-Posteriori (MAP) or Maximum Likelihood (ML) fit to the power spectrum.
MAP fits include priors, ML fits do not.
Parameters:  lpost (
stingray.modeling.PSDPosterior
object) – An instance of classstingray.modeling.PSDPosterior
that defines the function to be minimized (either inloglikelihood
orlogposterior
)  t0 ({list  numpy.ndarray}) – List/array with set of initial parameters
 neg (bool, optional, default
True
) – Boolean to be passed tolpost
, setting whether to use the negative posterior or the negative loglikelihood.  scipy_optimize_options (dict, optional, default None) – A dictionary with options for
scipy.optimize.minimize
, directly passed on as keyword arguments.
Returns: res – An object containing useful summaries of the fitting procedure. For details, see documentation of
OptimizationResults
.Return type: OptimizationResults
object

plotfits
(res1, res2=None, save_plot=False, namestr='test', log=False)[source]¶ Plotting method that allows the user to plot either one or two best-fit models with the data.
Plots a power spectrum with the bestfit model, as well as the data/model residuals for each model.
Parameters:  res1 (
OptimizationResults
object) – Output of a successful fitting procedure  res2 (
OptimizationResults
object, optional, defaultNone
) – Optional output of a second successful fitting procedure, e.g. with a competing model  save_plot (bool, optional, default
False
) – IfTrue
, the resulting figure will be saved to a file  namestr (str, optional, default
test
) – Ifsave_plot
isTrue
, this string defines the path and file name for the output plot  log (bool, optional, default
False
) – IfTrue
, plot the axes logarithmically.

sample
(lpost, t0, cov=None, nwalkers=500, niter=100, burnin=100, threads=1, print_results=True, plot=False, namestr='test')[source]¶ Sample the posterior distribution defined in
lpost
using MCMC. Here we use theemcee
package, but other implementations could in principle be used.Parameters:  lpost (instance of a
Posterior
subclass) – an instance of class Posterior
or one of its subclasses that defines the function to be minimized (either inloglikelihood
orlogposterior
)  t0 (iterable) – list or array containing the starting parameters. Its length
must match
lpost.model.npar
.  nwalkers (int, optional, default 500) – The number of walkers (chains) to use during the MCMC procedure. The more walkers are used, the slower the estimation will be, but the better the final distribution is likely to be.
 niter (int, optional, default 100) – The number of iterations to run the MCMC chains for. The larger this number, the longer the estimation will take, but the higher the chance that the walkers have actually converged on the true posterior distribution.
 burnin (int, optional, default 100) – The number of iterations to run the walkers before convergence is assumed to have occurred. This part of the chain will be discarded before sampling from what is then assumed to be the posterior distribution desired.
 threads (int, optional, default 1) – The number of threads for parallelization.
Default is
1
, i.e. no parallelization  print_results (bool, optional, default True) – Boolean flag setting whether the results of the MCMC run should be printed to standard output
 plot (bool, optional, default False) – Boolean flag setting whether summary plots of the MCMC chains should be produced
 namestr (str, optional, default
test
) – Optional string for output file names for the plotting.
Returns: res – An object containing useful summaries of the sampling procedure. For details see documentation of
SamplingResults
.Return type: SamplingResults
object

simulate_highest_outlier
(s_all, lpost, t0, max_post=True, seed=None)[source]¶ Simulate \(n\) power spectra from a model and then find the highest data/model outlier in each.
The data/model outlier is defined as
\[\max{(T_R = 2(\mathrm{data}/\mathrm{model}))} .\]Parameters:  s_all (numpy.ndarray) – A list of parameter values derived either from an approximation of the
likelihood surface, or from an MCMC run. Has dimensions
(n, ndim)
, wheren
is the number of simulated power spectra to generate, andndim
the number of model parameters.  lpost (instance of a
Posterior
subclass) – an instance of classPosterior
or one of its subclasses that defines the function to be minimized (either inloglikelihood
orlogposterior
)  t0 (iterable) – list or array containing the starting parameters. Its length
must match
lpost.model.npar
.  max_post (bool, optional, default
True
) – IfTrue
, do MAP fits on the power spectrum to find the highest data/model outlier. Otherwise, do a Maximum Likelihood fit. If True, the simulated power spectra will be generated from an MCMC run, otherwise the method will employ the approximated covariance matrix for the parameters derived from the likelihood surface to generate samples from that likelihood function.
None
) – An optional number to seed the random number generator with, for reproducibility of the results obtained with this method.
Returns: max_y_all – An array of maximum outliers for each simulated power spectrum
Return type: numpy.ndarray

simulate_lrts
(s_all, lpost1, t1, lpost2, t2, seed=None)[source]¶ Simulate likelihood ratios for two given models based on MCMC samples for the simpler model (i.e. the null hypothesis).
Parameters:  s_all (numpy.ndarray of shape
(nsamples, lpost1.npar)
) – An array with MCMC samples derived from the null hypothesis model inlpost1
. Its second dimension must match the number of free parameters inlpost1.model
.  lpost1 (
LogLikelihood
orPosterior
subclass object) – Object containing the null hypothesis model  t1 (iterable of length
lpost1.npar
) – A starting guess for fitting the model inlpost1
 lpost2 (
LogLikelihood
orPosterior
subclass object) – Object containing the alternative hypothesis model  t2 (iterable of length
lpost2.npar
) – A starting guess for fitting the model inlpost2
 max_post (bool, optional, default
True
) – IfTrue
, thenlpost1
andlpost2
should bePosterior
subclass objects; ifFalse
, thenlpost1
andlpost2
should beLogLikelihood
subclass objects  seed (int, optional default
None
) – A seed to initialize thenumpy.random.RandomState
object to be passed on to_generate_data
. Useful for producing exactly reproducible results
Returns: lrt_sim – An array with the simulated likelihood ratios for the simulated data
Return type: numpy.ndarray
Auxiliary Classes¶
These are helper classes instantiated by stingray.modeling.ParameterEstimation
and its subclasses to
organize the results of model fitting and sampling in a more meaningful, easily accessible way.

class
stingray.modeling.
OptimizationResults
(lpost, res, neg=True)[source]¶ Helper class that will contain the results of the regression. Less fiddly than a dictionary.
Parameters:  lpost (instance of
Posterior
or one of its subclasses) – The object containing the function that is being optimized in the regression  res (instance of
scipy.OptimizeResult
) – The object containing the results from an optimization run

neg
¶ A flag that sets whether the loglikelihood or negative loglikelihood is being used
Type: bool, optional, default True

result
¶ The result of the optimization, i.e. the function value at the minimum that the optimizer found
Type: float

p_opt
¶ The list of parameters at the minimum found by the optimizer
Type: iterable

model
¶ The parametric model fit to the data
Type: astropy.models.Model
instance

cov
¶ The covariance matrix for the parameters, has shape
(len(p_opt), len(p_opt))
Type: numpy.ndarray

err
¶ The standard deviation of the parameters, derived from the diagonal of
cov
. Has the same shape asp_opt
Type: numpy.ndarray

mfit
¶ The values of the model for all
x
Type: numpy.ndarray

aic
¶ The Akaike Information Criterion, derived from the log(likelihood) and often used in model comparison between nonnested models; For more details, see [aic]
Type: float

bic
¶ The Bayesian Information Criterion, derived from the log(likelihood) and often used in model comparison between nonnested models; For more details, see [bic]
Type: float

dof
¶ The number of degrees of freedom in the problem, defined as the number of data points  the number of parameters
Type: int
References
[aic] http://ieeexplore.ieee.org/document/1100705/?reload=true [bic] https://projecteuclid.org/euclid.aos/1176344136 
_compute_covariance
(lpost, res)[source]¶ Compute the covariance of the parameters using inverse of the Hessian, i.e. the secondorder derivative of the loglikelihood. Also calculates an estimate of the standard deviation in the parameters, using the square root of the diagonal of the covariance matrix.
The Hessian is either estimated directly by the chosen method of fitting, or approximated using the
statsmodels
approx_hess
function.Parameters:  lpost (instance of
Posterior
or one of its subclasses) – The object containing the function that is being optimized in the regression  res (instance of
scipy
’sOptimizeResult
class) – The object containing the results from an optimization run

_compute_criteria
(lpost)[source]¶ Compute various information criteria useful for model comparison in nonnested models.
Currently implemented are the Akaike Information Criterion [aic] and the Bayesian Information Criterion [bic].
Parameters: lpost (instance of Posterior
or one of its subclasses) – The object containing the function that is being optimized in the regressionReferences
[aic] http://ieeexplore.ieee.org/document/1100705/?reload=true [bic] https://projecteuclid.org/euclid.aos/1176344136
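Both criteria follow the standard textbook definitions (sketch; implementations differ in additive constants):

```python
import numpy as np

def aic(loglike, npar):
    # Akaike Information Criterion: 2k - 2 ln(L)
    return 2.0 * npar - 2.0 * loglike

def bic(loglike, npar, ndata):
    # Bayesian Information Criterion: k ln(n) - 2 ln(L)
    return npar * np.log(ndata) - 2.0 * loglike
```

Lower values indicate the preferred model; BIC penalizes extra parameters more strongly for large data sets.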

_compute_model
(lpost)[source]¶ Compute the values of the bestfit model for all
x
.Parameters: lpost (instance of Posterior
or one of its subclasses) – The object containing the function that is being optimized in the regression

_compute_statistics
(lpost)[source]¶ Compute some useful fit statistics, like the degrees of freedom and the figure of merit.
Parameters: lpost (instance of Posterior
or one of its subclasses) – The object containing the function that is being optimized in the regression

print_summary
(lpost, log=None)[source]¶ Print a useful summary of the fitting procedure to screen or a log file.
Parameters:  lpost (instance of
Posterior
or one of its subclasses) – The object containing the function that is being optimized in the regression  log (logging handler, optional, default None) – A handler used for logging the output properly

class
stingray.modeling.
SamplingResults
(sampler, ci_min=5, ci_max=95)[source]¶ Helper class that will contain the results of the sampling in a handy format.
Less fiddly than a dictionary.
Parameters:  sampler (
emcee.EnsembleSampler
object) – The object containing the sampler that’s done all the work.  ci_min (float out of [0,100]) – The lower bound percentile for printing credible intervals on the parameters
 ci_max (float out of [0,100]) – The upper bound percentile for printing credible intervals on the parameters

samples
¶ An array of samples from the MCMC run, including all chains flattened into one long (
nwalkers*niter
,ndim
) arrayType: numpy.ndarray

acor
¶ The autocorrelation length for the chains; should be shorter than the chains themselves for independent sampling
Type: float

rhat
¶ Weighted average of the between-sequence variance and within-sequence variance; the Gelman-Rubin convergence statistic [gelmanrubin]
Type: float

mean
¶ An array of size
ndim
, with the posterior means of the parameters derived from the MCMC chainsType: numpy.ndarray

std
¶ An array of size
ndim
with the posterior standard deviations of the parameters derived from the MCMC chainsType: numpy.ndarray

ci
¶ An array of shape
(ndim, 2)
containing the lower and upper bounds of the credible interval (the Bayesian equivalent of the confidence interval) for each parameter using the bounds set byci_min
andci_max
Type: numpy.ndarray
References
[gelmanrubin] https://projecteuclid.org/euclid.ss/1177011136 
_check_convergence
(sampler)[source]¶ Compute common statistics for convergence of the MCMC chains. While you can never be completely sure that your chains converged, these present reasonable heuristics to give an indication whether convergence is very far off or reasonably close.
Currently implemented are the autocorrelation time [autocorr] and the GelmanRubin convergence criterion [gelmanrubin].
Parameters: sampler (an emcee.EnsembleSampler
object) –References
[autocorr] https://arxiv.org/abs/1202.3665 [gelmanrubin] https://projecteuclid.org/euclid.ss/1177011136

_compute_rhat
(sampler)[source]¶ Compute GelmanRubin convergence criterion [gelmanrubin].
Parameters: sampler (an emcee.EnsembleSampler object) – References
[gelmanrubin] https://projecteuclid.org/euclid.ss/1177011136
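As a rough sketch of the statistic described above (not Stingray's internal implementation), the Gelman-Rubin R-hat for chains of a single parameter can be computed from the between-sequence and within-sequence variances:

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Rough Gelman-Rubin R-hat for an (nchains, niter) array of samples
    of a single parameter. Values close to 1 suggest convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # between-sequence variance
    b = n * chain_means.var(ddof=1)
    # within-sequence variance
    w = chains.var(axis=1, ddof=1).mean()
    # weighted average of the two variance estimates
    var_hat = (n - 1) / n * w + b / n
    return np.sqrt(var_hat / w)

# four well-mixed chains drawn from the same distribution: R-hat ~ 1
rng = np.random.default_rng(0)
rhat = gelman_rubin_rhat(rng.normal(size=(4, 1000)))
```

For well-mixed chains the between-sequence variance matches the within-sequence variance and R-hat is close to 1; poorly mixed chains give values noticeably above 1.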

_infer
(ci_min=5, ci_max=95)[source]¶ Infer the posterior means, standard deviations and credible intervals (i.e. the Bayesian equivalent to confidence intervals) from the Posterior samples for each parameter.
Parameters:  ci_min (float out of [0,100], default 5) – The lower bound percentile for the credible intervals
 ci_max (float out of [0,100], default 95) – The upper bound percentile for the credible intervals
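A minimal sketch of the kind of summary this method produces, assuming flattened samples of shape (nsamples, ndim) (this is not the library's exact code):

```python
import numpy as np

def summarize_posterior(samples, ci_min=5, ci_max=95):
    """Posterior mean, standard deviation and credible interval per
    parameter, from flattened MCMC samples of shape (nsamples, ndim)."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    # credible interval bounds, one (low, high) pair per parameter
    ci = np.percentile(samples, [ci_min, ci_max], axis=0).T
    return mean, std, ci

rng = np.random.default_rng(1)
mean, std, ci = summarize_posterior(rng.normal(size=(5000, 2)))
```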

plot_results
(nsamples=1000, fig=None, save_plot=False, filename='test.pdf')[source]¶ Plot some results in a triangle plot. If installed, this method will use [corner] for the plotting; if not, it uses its own code to make a triangle plot.
By default, this method returns a
matplotlib.Figure
object, but ifsave_plot=True
the plot can be saved to file automatically.
Parameters:  nsamples (int, default 1000) – The maximum number of samples used for plotting.
 fig (matplotlib.Figure instance, default None) – If created externally, you can pass a Figure instance to this method. If none is passed, the method will create one internally.
 save_plot (bool, default
False
) – IfTrue
save the plot to file with a file name specified by the keywordfilename
. IfFalse
just return theFigure
object  filename (str) – Name of the output file with the figure
References
[corner] https://github.com/dfm/corner.py
Convenience Functions¶
These functions are designed to help the user perform common tasks related to modelling and parameter
estimation. In particular, the function stingray.modeling.set_logprior()
is designed to
help users set priors in their stingray.modeling.Posterior
subclass objects.

stingray.modeling.
set_logprior
(lpost, priors)[source]¶ This function constructs the
logprior
method required to successfully use aPosterior
object.All instances of class
Posterior
and its subclasses are required to implement a logprior
method. However, priors are strongly problem-dependent and therefore usually user-defined. This function allows for setting the
logprior
method on any instance of classPosterior
efficiently by allowing the user to pass a dictionary of priors and an instance of classPosterior
.Parameters:  lpost (
Posterior
object) – An instance of classPosterior
or any of its subclasses  priors (dict) – A dictionary containing the prior definitions. Keys are parameter
names as defined by the
astropy.models.FittableModel
instance supplied to themodel
parameter inPosterior
. Items are functions that take a parameter as input and return the logprior probability of that parameter.
Returns: logprior – The function definition for the prior
Return type: function
Examples
Make a light curve and power spectrum
>>> photon_arrivals = np.sort(np.random.uniform(0, 1000, size=10000))
>>> lc = Lightcurve.make_lightcurve(photon_arrivals, dt=1.0)
>>> ps = Powerspectrum(lc, norm="frac")
Define the model
>>> pl = models.PowerLaw1D()
>>> pl.x_0.fixed = True
Instantiate the posterior:
>>> lpost = PSDPosterior(ps.freq, ps.power, pl, m=ps.m)
Define the priors:
>>> p_alpha = lambda alpha: ((1. <= alpha) & (alpha <= 5.))
>>> p_amplitude = lambda amplitude: ((-10 <= np.log(amplitude)) &
...                                  ((np.log(amplitude) <= 10.0)))
>>> priors = {"alpha": p_alpha, "amplitude": p_amplitude}
Set the logprior method in the lpost object:
>>> lpost.logprior = set_logprior(lpost, priors)
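The mechanism behind set_logprior can be sketched as follows: the generated logprior multiplies the individual prior probabilities and returns the log of the product, or -inf where any prior is zero. This is a simplified stand-in, not the library's exact implementation:

```python
import numpy as np

def make_logprior(priors):
    """Build a log-prior function from a dict mapping parameter names
    to functions that return the prior probability of a value."""
    names = list(priors)

    def logprior(t0):
        # t0 is ordered like `names`; multiply the individual priors
        p = 1.0
        for name, value in zip(names, t0):
            p *= priors[name](value)
        return np.log(p) if p > 0 else -np.inf

    return logprior

p_alpha = lambda alpha: (1.0 <= alpha) & (alpha <= 5.0)
p_amplitude = lambda amplitude: (-10 <= np.log(amplitude)) & (np.log(amplitude) <= 10.0)
lp = make_logprior({"alpha": p_alpha, "amplitude": p_amplitude})
```

With flat priors like these, parameters inside the allowed ranges give a log-prior of 0, while parameters outside any range give -inf, which rejects the proposal outright.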

stingray.modeling.scripts.
fit_powerspectrum
(ps, model, starting_pars=None, max_post=False, priors=None, fitmethod='L-BFGS-B')[source]¶ Fit a number of Lorentzians to a power spectrum, possibly including white noise. Each Lorentzian has three parameters (amplitude, centroid position, full-width at half maximum), plus one extra parameter if the white noise level should be fit as well. Priors for each parameter can be included in case max_post = True, in which case the function will attempt a Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one entry for each parameter. The parameter names are (amplitude_i, x_0_i, fwhm_i) for each i out of a total of N Lorentzians. The white noise level has a parameter amplitude_(N+1). For example, a model with two Lorentzians and a white noise level would have parameters: [amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].
Parameters:  ps (Powerspectrum) – A Powerspectrum object with the data to be fit
 model (astropy.modeling.models class instance) – The parametric model supposed to represent the data. For details see the astropy.modeling documentation
 starting_pars (iterable, optional, default None) – The list of starting guesses for the optimizer. If it is not provided, then default parameters are taken from model. See explanation above for ordering of parameters in this list.
 fit_whitenoise (bool, optional, default True) – If True, the code will attempt to fit a white noise level along with the Lorentzians. Be sure to include a starting parameter for the optimizer in starting_pars!
 max_post (bool, optional, default False) – If True, perform a Maximum-A-Posteriori fit of the data rather than a Maximum Likelihood fit. Note that this requires priors to be specified, otherwise this will cause an exception!
 priors ({dict  None}, optional, default None) – Dictionary with priors for the MAP fit. This should be of the form {“parameter name”: probability distribution, …}
 fitmethod (string, optional, default "L-BFGS-B") – Specifies an optimization algorithm to use. Supply any valid option for scipy.optimize.minimize.
Returns:  parest (PSDParEst object) – A PSDParEst object for further analysis
 res (OptimizationResults object) – The OptimizationResults object storing useful results and quantities relating to the fit
Examples
We start by making an example power spectrum from a power law plus a constant white noise level:
>>> m = 1
>>> nfreq = 100000
>>> freq = np.linspace(1, 1000, nfreq)
>>> np.random.seed(100)  # set the seed for the random number generator
>>> noise = np.random.exponential(size=nfreq)
>>> model = models.PowerLaw1D() + models.Const1D()
>>> model.x_0_0.fixed = True
>>> alpha_0 = 2.0
>>> amplitude_0 = 100.0
>>> amplitude_1 = 2.0
>>> model.alpha_0 = alpha_0
>>> model.amplitude_0 = amplitude_0
>>> model.amplitude_1 = amplitude_1
>>> p = model(freq)
>>> power = noise * p
>>> ps = Powerspectrum()
>>> ps.freq = freq
>>> ps.power = power
>>> ps.m = m
>>> ps.df = freq[1] - freq[0]
>>> ps.norm = "leahy"
Now we have to guess starting parameters: here, the power-law amplitude and index, followed by the white noise amplitude.
>>> t0 = [80, 1.5, 2.5]
Let's also make a model to test:
>>> model_to_test = models.PowerLaw1D() + models.Const1D()
>>> model_to_test.amplitude_1.fixed = True
We're ready for doing the fit:
>>> parest, res = fit_powerspectrum(ps, model_to_test, t0)
res contains a whole array of useful information about the fit, for example the parameters at the optimum:
>>> p_opt = res.p_opt

stingray.modeling.scripts.
fit_crossspectrum
(cs, model, starting_pars=None, max_post=False, priors=None, fitmethod='L-BFGS-B')[source]¶ Fit a number of Lorentzians to a cross spectrum, possibly including white noise. Each Lorentzian has three parameters (amplitude, centroid position, full-width at half maximum), plus one extra parameter if the white noise level should be fit as well. Priors for each parameter can be included in case max_post = True, in which case the function will attempt a Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one entry for each parameter. The parameter names are (amplitude_i, x_0_i, fwhm_i) for each i out of a total of N Lorentzians. The white noise level has a parameter amplitude_(N+1). For example, a model with two Lorentzians and a white noise level would have parameters: [amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].
Parameters:  cs (Crossspectrum) – A Crossspectrum object with the data to be fit
 model (astropy.modeling.models class instance) – The parametric model supposed to represent the data. For details see the astropy.modeling documentation
 starting_pars (iterable, optional, default None) – The list of starting guesses for the optimizer. If it is not provided, then default parameters are taken from model. See explanation above for ordering of parameters in this list.
 max_post (bool, optional, default False) – If True, perform a Maximum-A-Posteriori fit of the data rather than a Maximum Likelihood fit. Note that this requires priors to be specified, otherwise this will cause an exception!
 priors ({dict  None}, optional, default None) – Dictionary with priors for the MAP fit. This should be of the form {“parameter name”: probability distribution, …}
 fitmethod (string, optional, default "L-BFGS-B") – Specifies an optimization algorithm to use. Supply any valid option for scipy.optimize.minimize.
Returns:  parest (PSDParEst object) – A PSDParEst object for further analysis
 res (OptimizationResults object) – The OptimizationResults object storing useful results and quantities relating to the fit

stingray.modeling.scripts.
fit_lorentzians
(ps, nlor, starting_pars, fit_whitenoise=True, max_post=False, priors=None, fitmethod='L-BFGS-B')[source]¶ Fit a number of Lorentzians to a power spectrum, possibly including white noise. Each Lorentzian has three parameters (amplitude, centroid position, full-width at half maximum), plus one extra parameter if the white noise level should be fit as well. Priors for each parameter can be included in case max_post = True, in which case the function will attempt a Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one entry for each parameter. The parameter names are (amplitude_i, x_0_i, fwhm_i) for each i out of a total of N Lorentzians. The white noise level has a parameter amplitude_(N+1). For example, a model with two Lorentzians and a white noise level would have parameters: [amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].
Parameters:  ps (Powerspectrum) – A Powerspectrum object with the data to be fit
 nlor (int) – The number of Lorentzians to fit
 starting_pars (iterable) – The list of starting guesses for the optimizer. If it is not provided, then default parameters are taken from model. See explanation above for ordering of parameters in this list.
 fit_whitenoise (bool, optional, default True) – If True, the code will attempt to fit a white noise level along with the Lorentzians. Be sure to include a starting parameter for the optimizer in starting_pars!
 max_post (bool, optional, default False) – If True, perform a Maximum-A-Posteriori fit of the data rather than a Maximum Likelihood fit. Note that this requires priors to be specified, otherwise this will cause an exception!
 priors ({dict  None}, optional, default None) – Dictionary with priors for the MAP fit. This should be of the form {“parameter name”: probability distribution, …}
 fitmethod (string, optional, default "L-BFGS-B") – Specifies an optimization algorithm to use. Supply any valid option for scipy.optimize.minimize.
Returns:  parest (PSDParEst object) – A PSDParEst object for further analysis
 res (OptimizationResults object) – The OptimizationResults object storing useful results and quantities relating to the fit
Examples
We start by making an example power spectrum with three Lorentzians:
>>> np.random.seed(400)
>>> nlor = 3
>>> x_0_0 = 0.5
>>> x_0_1 = 2.0
>>> x_0_2 = 7.5
>>> amplitude_0 = 150.0
>>> amplitude_1 = 50.0
>>> amplitude_2 = 15.0
>>> fwhm_0 = 0.1
>>> fwhm_1 = 1.0
>>> fwhm_2 = 0.5
We will also include a white noise level:
>>> whitenoise = 2.0
>>> model = (models.Lorentz1D(amplitude_0, x_0_0, fwhm_0) +
...          models.Lorentz1D(amplitude_1, x_0_1, fwhm_1) +
...          models.Lorentz1D(amplitude_2, x_0_2, fwhm_2) +
...          models.Const1D(whitenoise))
>>> freq = np.linspace(0.01, 10.0, 1000)
>>> p = model(freq)
>>> noise = np.random.exponential(size=len(freq))
>>> power = p * noise
>>> ps = Powerspectrum()
>>> ps.freq = freq
>>> ps.power = power
>>> ps.df = ps.freq[1] - ps.freq[0]
>>> ps.m = 1
Now we have to guess starting parameters. For each Lorentzian, we have amplitude, centroid position and fwhm, and this pattern repeats for each Lorentzian in the fit. The white noise level is the last parameter.
>>> t0 = [150, 0.4, 0.2, 50, 2.3, 0.6, 20, 8.0, 0.4, 2.1]
We're ready for doing the fit:
>>> parest, res = fit_lorentzians(ps, nlor, t0)
res contains a whole array of useful information about the fit, for example the parameters at the optimum:
>>> p_opt = res.p_opt
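The parameter ordering described above (amplitude_i, x_0_i, fwhm_i per Lorentzian, then the white noise amplitude last) can be made explicit with a small helper. This is a hypothetical convenience function, not part of Stingray:

```python
def lorentzian_starting_pars(guesses, whitenoise=None):
    """Flatten per-Lorentzian (amplitude, x_0, fwhm) guesses into the
    flat list ordering expected by fit_lorentzians, optionally
    appending the white noise level as the last parameter."""
    t0 = []
    for amplitude, x_0, fwhm in guesses:
        t0.extend([amplitude, x_0, fwhm])
    if whitenoise is not None:
        t0.append(whitenoise)
    return t0

# reproduces the t0 list from the example above
t0 = lorentzian_starting_pars(
    [(150, 0.4, 0.2), (50, 2.3, 0.6), (20, 8.0, 0.4)], whitenoise=2.1)
```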
Pulsar¶
This submodule broadly defines functionality related to (Xray) pulsar data analysis, especially periodicity searches.

stingray.pulse.
fftfit
(prof, template=None, quick=False, sigma=None, use_bootstrap=False, **fftfit_kwargs)[source]¶ Align a template to a pulse profile.
Parameters:  prof (array) – The pulse profile
 template (array, default None) – The template of the pulse used to perform the TOA calculation. If None, a simple sinusoid is used
Other Parameters:  sigma (array) – error on profile bins (currently has no effect)
 use_bootstrap (bool) – Calculate errors using a bootstrap method, with fftfit_error
 **fftfit_kwargs (additional arguments for fftfit_error)
Returns:  mean_amp, std_amp (floats) – Mean and standard deviation of the amplitude
 mean_phase, std_phase (floats) – Mean and standard deviation of the phase

stingray.pulse.
fftfit_error
(template, sigma=None, **fftfit_kwargs)[source]¶ Calculate the error on the fit parameters from FFTFIT.
Parameters:  phase (array) – The phases corresponding to each bin of the profile
 prof (array) – The pulse profile
 template (array) – The template of the pulse used to perform the TOA calculation
 p0 (list) – The initial parameters for the fit
Other Parameters:  nstep (int, optional, default 100) – Number of steps for the bootstrap method
 sigma (array, default None) – error on profile bins. If None, the square root of the mean profile is used.
Returns:  mean_amp, std_amp (floats) – Mean and standard deviation of the amplitude
 mean_phase, std_phase (floats) – Mean and standard deviation of the phase

stingray.pulse.
fftfit_fun
(profile, template, amplitude, phase)[source]¶ Function to be minimized for the FFTFIT method.
From Taylor (1992).
Parameters:  profile (array) – The pulse profile
 template (array) – A pulse shape template, of the same length as profile.
 amplitude, phase (floats) – The amplitude and phase of the template w.r.t. the real profile.
Returns: fftfit_chisq – The chi-square-like statistic of FFTFIT
Return type: float
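The statistic compares the Fourier harmonics of the profile with an amplitude-scaled, phase-rotated version of the template's harmonics. A rough sketch in the spirit of Taylor (1992), not Stingray's exact code:

```python
import numpy as np

def fftfit_stat(profile, template, amplitude, phase):
    """Chi-square-like mismatch between the profile's Fourier harmonics
    and an amplitude-scaled, phase-rotated template."""
    prof_ft = np.fft.rfft(profile)
    temp_ft = np.fft.rfft(template)
    k = np.arange(len(prof_ft))
    # rotate harmonic k by k * phase (phase in cycles) and scale
    shifted = amplitude * temp_ft * np.exp(-2j * np.pi * k * phase)
    # skip the DC term; sum squared residuals over the harmonics
    return np.sum(np.abs(prof_ft[1:] - shifted[1:]) ** 2)

# a profile that is the template shifted by a quarter of a cycle
bins = np.linspace(0, 1, 64, endpoint=False)
template = 1 + np.cos(2 * np.pi * bins)
profile = 1 + np.cos(2 * np.pi * (bins - 0.25))
```

Minimizing this statistic over amplitude and phase recovers the shift: here the statistic is essentially zero at phase = 0.25 and large elsewhere.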

stingray.pulse.
fold_events
(times, *frequency_derivatives, **opts)[source]¶ Epoch folding with exposure correction.
Parameters:  times (array of floats) –
 f, fdot, fddot... (floats) – The frequency and any number of derivatives.
Other Parameters:  nbin (int, optional, default 16) – The number of bins in the pulse profile
 weights (float or array of floats, optional) – The weights of the data. It can either be specified as a single value
for all points, or an array with the same length as
time
 gtis ([[gti0_0, gti0_1], [gti1_0, gti1_1], …], optional) – Good time intervals
 ref_time (float, optional, default 0) – Reference time for the timing solution
 expocorr (bool, default False) – Correct each bin for exposure (use when the period of the pulsar is comparable to that of GTIs)
Returns:  phase_bins (array of floats)
 The phases corresponding to the pulse profile
 profile (array of floats)
 The pulse profile
 profile_err (array of floats)
 The uncertainties on the pulse profile
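A bare-bones version of epoch folding, assuming a constant frequency and no exposure correction (Stingray's implementation additionally handles frequency derivatives and GTIs):

```python
import numpy as np

def fold_profile(times, frequency, nbin=16, ref_time=0):
    """Fold event times at a constant frequency into a pulse profile."""
    phases = ((np.asarray(times) - ref_time) * frequency) % 1.0
    edges = np.linspace(0, 1, nbin + 1)
    profile, _ = np.histogram(phases, bins=edges)
    phase_bins = (edges[:-1] + edges[1:]) / 2
    # Poisson uncertainties on the counts in each bin
    profile_err = np.sqrt(profile)
    return phase_bins, profile, profile_err

rng = np.random.default_rng(2)
times = rng.uniform(0, 100, size=1000)
phase_bins, profile, profile_err = fold_profile(times, frequency=1.3)
```

Every event lands in exactly one phase bin, so the profile sums to the total number of events.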

stingray.pulse.
get_TOA
(prof, period, tstart, template=None, additional_phase=0, quick=False, debug=False, use_bootstrap=False, **fftfit_kwargs)[source]¶ Calculate the TimeOfArrival of a pulse.
Parameters:  prof (array) – The pulse profile
 template (array, default None) – The template of the pulse used to perform the TOA calculation, if any. Otherwise use the default of fftfit
 tstart (float) – The time at the start of the pulse profile
Other Parameters: nstep (int, optional, default 100) – Number of steps for the bootstrap method
Returns: toa, toastd – Mean and standard deviation of the TOA
Return type: floats

stingray.pulse.
phase_exposure
(start_time, stop_time, period, nbin=16, gtis=None)[source]¶ Calculate the exposure on each phase of a pulse profile.
Parameters:  start_time, stop_time (floats) – Starting and stopping time (or phase if period==1)
 period (float) – The pulse period (if 1, equivalent to phases)
Other Parameters:  nbin (int, optional, default 16) – The number of bins in the profile
 gtis ([[gti00, gti01], [gti10, gti11], …], optional, default None) – Good Time Intervals
Returns: expo – The normalized exposure of each bin in the pulse profile (1 is the highest exposure, 0 the lowest)
Return type: array of floats

stingray.pulse.
profile_stat
(profile, err=None)[source]¶ Calculate the epoch folding statistics à la Leahy et al. (1983).
Parameters: profile (array) – The pulse profile
Other Parameters: err (float or array) – The uncertainties on the pulse profile
Returns: stat – The epoch folding statistics
Return type: float
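The statistic measures the deviation of the profile from a constant, in units of the bin uncertainties. A minimal sketch in the Leahy et al. (1983) spirit, assuming Poisson errors when none are given (not Stingray's exact code):

```python
import numpy as np

def epoch_folding_stat(profile, err=None):
    """Chi-square-like epoch folding statistic: deviation of the
    profile from its mean, normalized by the bin uncertainties."""
    profile = np.asarray(profile, dtype=float)
    if err is None:
        # assume Poisson-distributed counts in each bin
        err = np.sqrt(profile.mean())
    return np.sum((profile - profile.mean()) ** 2 / err ** 2)

flat_stat = epoch_folding_stat([10.0, 10.0, 10.0, 10.0])
peaked_stat = epoch_folding_stat([10.0, 40.0, 10.0, 10.0])
```

A flat profile gives a statistic of zero, while a pulsed profile gives a large value, which is what makes the statistic useful for period searches.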

stingray.pulse.
pulse_phase
(times, *frequency_derivatives, **opts)[source]¶ Calculate pulse phase from the frequency and its derivatives.
Parameters:  times (array of floats) – The times at which the phase is calculated
 *frequency_derivatives (floats) – List of derivatives in increasing order, starting from zero.
Other Parameters:  ph0 (float) – The starting phase
 to_1 (bool, default True) – Only return the fractional part of the phase, normalized from 0 to 1
Returns: phases – The absolute pulse phase
Return type: array of floats
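The phase follows from a Taylor expansion of the frequency: phase(t) = ph0 + f t + fdot t^2/2 + fddot t^3/6 + ... A sketch of that expansion (not Stingray's exact code):

```python
from math import factorial

def taylor_phase(t, *frequency_derivatives, ph0=0.0, to_1=True):
    """Pulse phase from frequency derivatives (f, fdot, fddot, ...)."""
    phase = ph0
    for n, deriv in enumerate(frequency_derivatives):
        # the n-th frequency derivative contributes deriv * t**(n+1) / (n+1)!
        phase += deriv * t ** (n + 1) / factorial(n + 1)
    return phase % 1.0 if to_1 else phase

ph = taylor_phase(2.0, 1.25)              # f = 1.25 Hz: phase 2.5 -> 0.5
ph_full = taylor_phase(2.0, 1.25, to_1=False)
```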

stingray.pulse.
z_n
(phase, n=2, norm=1)[source]¶ Z^2_n statistics, à la Buccheri et al. 1983, A&A, 128, 245, eq. 2.
Parameters:  phase (array of floats) – The phases of the events
 n (int, default 2) – Number of harmonics, including the fundamental
Other Parameters: norm (float or array of floats) – A normalization factor that gets multiplied as a weight.
Returns: z2_n – The Z^2_n statistics of the events.
Return type: float
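The Z^2_n statistic sums the squared Fourier amplitudes of the event phases over the first n harmonics. A plain sketch without the norm weighting (phases in cycles; not Stingray's exact code):

```python
import numpy as np

def z_n_stat(phase, n=2):
    """Z^2_n statistic (Buccheri et al. 1983, eq. 2) for event
    phases given in cycles (0 to 1)."""
    phase = 2 * np.pi * np.asarray(phase)
    nev = len(phase)
    total = 0.0
    for k in range(1, n + 1):
        # squared amplitude of the k-th harmonic
        total += np.sum(np.cos(k * phase)) ** 2 + np.sum(np.sin(k * phase)) ** 2
    return 2 / nev * total

# all events at the same phase: maximally non-uniform, Z^2_n = 2 * n * N
z_max = z_n_stat(np.zeros(100), n=2)
```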
Simulator¶
This submodule defines extensive functionality related to simulating spectral-timing data sets, including transfer and window functions, and simulating light curves from power spectra for a range of stochastic processes.

class
stingray.simulator.simulator.
Simulator
(dt=1, N=1024, mean=0, rms=1, red_noise=1, random_state=None, tstart=0.0)[source]¶ Methods to simulate and visualize light curves.
TODO: Improve documentation
Parameters:  dt (int, default 1) – time resolution of simulated light curve
 N (int, default 1024) – bins count of simulated light curve
 mean (float, default 0) – mean value of the simulated light curve
 rms (float, default 1) – fractional rms of the simulated light curve, actual rms is calculated by mean*rms
 red_noise (int, default 1) – multiple of real length of light curve, by which to simulate, to avoid red noise leakage
 random_state (int, default None) – seed value for random processes

powerspectrum
(lc, seg_size=None)[source]¶ Make a powerspectrum of the simulated light curve.
Parameters: lc (lightcurve.Lightcurve object OR iterable of lightcurve.Lightcurve objects) – The light curve data to be Fourier-transformed.
Returns: power – The array of normalized squared absolute values of Fourier amplitudes
Return type: numpy.ndarray

static
read
(filename, format_='pickle')[source]¶ Imports Simulator object.
Parameters:  filename (str) – Name of the file from which the Simulator object is read
 format_ (str, default 'pickle') – Format of the stored object
Returns: object
Return type: Simulator object

relativistic_ir
(t1=3, t2=4, t3=10, p1=1, p2=1.4, rise=0.6, decay=0.1)[source]¶ Construct a realistic impulse response considering the relativistic effects.
Parameters:  t1 (int) – primary peak time
 t2 (int) – secondary peak time
 t3 (int) – end time
 p1 (float) – value of primary peak
 p2 (float) – value of secondary peak
 rise (float) – slope of rising exponential from primary peak to secondary peak
 decay (float) – slope of decaying exponential from secondary peak to end time
Returns: h – Constructed impulse response
Return type: numpy.ndarray

simple_ir
(start=0, width=1000, intensity=1)[source]¶ Construct a simple impulse response using start time, width and scaling intensity. To create a delta impulse response, set width to 1.
Parameters:  start (int, default 0) – start time of the impulse response
 width (int, default 1000) – width of the impulse response
 intensity (float, default 1) – scaling intensity of the impulse response
Returns: h – Constructed impulse response
Return type: numpy.ndarray
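The simple impulse response is essentially a boxcar: zero before the start time, then a constant intensity over the given width. A sketch assuming a time resolution dt (a hypothetical helper, not the method itself):

```python
import numpy as np

def boxcar_ir(start=0, width=1000, intensity=1, dt=1):
    """Boxcar impulse response: `intensity` between `start` and
    `start + width`, zero before. Set width to dt for a delta-like
    response."""
    nbins_start = int(start / dt)
    nbins_width = max(int(width / dt), 1)
    h = np.zeros(nbins_start + nbins_width)
    h[nbins_start:] = intensity
    return h

h = boxcar_ir(start=5, width=10, intensity=0.5, dt=1)
```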

simulate
(*args)[source]¶ Simulate light curve generation using power spectrum or impulse response.
Examples
 x = simulate(beta):
For generating a light curve using power law spectrum.
 Parameters:
 beta : float Defines the shape of spectrum
 x = simulate(s):
 For generating a light curve from userprovided spectrum.
Note: In this case, the red_noise parameter is provided. You can generate a longer light curve by providing a higher frequency resolution on the input power spectrum.
 Parameters:
 s : arraylike power spectrum
 x = simulate(model):
For generating a light curve from predefined model
 Parameters:
 model : astropy.modeling.Model the predefined model
 x = simulate(‘model’, params):
For generating a light curve from predefined model
 Parameters:
 model : string the predefined model
 params : list iterable or dict the parameters for the predefined model
 x = simulate(s, h):
For generating a light curve using impulse response.
 Parameters:
 s : arraylike Underlying variability signal
 h : arraylike Impulse response
 x = simulate(s, h, ‘same’):
For generating a light curve of same length as input signal, using impulse response.
 Parameters:
 s : arraylike Underlying variability signal
 h : arraylike Impulse response
 mode : str mode can be 'same', 'filtered', or 'full'. 'same' indicates that the length of the output light curve is the same as that of the input signal. 'filtered' means that the length of the output light curve is len(s) - lag_delay. 'full' indicates that the length of the output light curve is len(s) + len(h) - 1.
Parameters: args – See examples below.
Returns: lightCurve
Return type: Lightcurve object
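Simulating a light curve from a power spectrum is commonly done with the Timmer & König (1995) recipe: draw Gaussian Fourier amplitudes whose variance is set by the spectrum, then inverse-transform. A sketch for a power-law spectrum, assuming this is the kind of algorithm used for simulate(beta) (not the Simulator's exact code):

```python
import numpy as np

def simulate_powerlaw_lc(beta, npoints=1024, dt=1.0, seed=None):
    """Timmer & Koenig (1995)-style light curve with spectrum ~ f**-beta."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(npoints, d=dt)[1:]  # skip the zero frequency
    power = freqs ** (-beta)
    # real and imaginary parts drawn independently, scaled by sqrt(S/2)
    real = rng.normal(size=len(freqs)) * np.sqrt(power / 2)
    imag = rng.normal(size=len(freqs)) * np.sqrt(power / 2)
    # zero DC component -> zero-mean light curve
    ft = np.concatenate(([0], real + 1j * imag))
    return np.fft.irfft(ft, n=npoints)

lc = simulate_powerlaw_lc(beta=1.0, npoints=1024, seed=3)
```

The result is then rescaled to the requested mean and fractional rms, and the red_noise factor works by simulating a light curve red_noise times longer and cutting out a segment, which mitigates red noise leakage.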