Most stingray classes are subclasses of a single class, stingray.StingrayObject, which
implements most of the I/O functionality and common behavior (e.g. strategies to combine data and
perform operations such as sum, difference, or negation). This class is not intended to be
instantiated directly, but rather to be used as a base class for other classes. Any class wanting
to inherit from stingray.StingrayObject should define a main_array_attr attribute, which
gives the name of the attribute that will store the "independent variable" main data array.
For example, for all time series-like objects the main array is the time array, while for
periodograms the main array is the frequency array.
All arrays sharing the length (not the shape: they might be multi-dimensional!) of the main array are called
“array attributes” and are accessible through the array_attrs method.
When applying a mask or any other selection to a stingray.StingrayObject,
all array attributes are filtered in the same way. Some array-like attributes might have the same length
by chance; in this case, the user or the developer should add these to the not_array_attr attribute.
For example, stingray.StingrayTimeseries has gti among the not_array_attr, since it is an
array but not related 1-to-1 to the main array, even if in some cases it might happen to have the same number
of elements as the main array, which is time.
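The array-attribute convention described above can be illustrated with a minimal sketch. This is a hypothetical toy class, not the actual stingray implementation: attributes whose first dimension matches the length of the main array count as array attributes, unless they are listed in not_array_attr.

```python
import numpy as np

# Toy sketch of the array-attribute convention (NOT stingray's code):
# "time" is the main array; "gti" is exempted via not_array_attr.
class MiniTimeseries:
    main_array_attr = "time"
    not_array_attr = ["gti"]

    def __init__(self, time, gti=None, **array_attrs):
        self.time = np.asarray(time)
        self.gti = gti
        for name, arr in array_attrs.items():
            setattr(self, name, np.asarray(arr))

    def array_attrs(self):
        n = len(self.time)
        return [
            name for name, val in vars(self).items()
            if isinstance(val, np.ndarray)
            and val.ndim > 0
            and val.shape[0] == n
            and name not in self.not_array_attr
        ]

ts = MiniTimeseries(time=[0, 1, 2], flux=[10.0, 12.0, 9.0], gti=[[0, 2]])
print(sorted(ts.array_attrs()))  # ['flux', 'time']
```

Here any selection (e.g. a boolean mask) would be applied to 'time' and 'flux' together, while 'gti' would be left alone.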
This base class defines some general-purpose utilities.
The main purpose is to have a consistent mechanism for:
round-tripping to and from Astropy Tables and other dataframes
round-tripping to files in different formats
The idea is that any object inheriting StingrayObject should,
just by defining an attribute called main_array_attr, be able to perform
the operations above, with no additional effort.
main_array_attr is, e.g. time for StingrayTimeseries and
Lightcurve, freq for Crossspectrum, energy for
VarEnergySpectrum, and so on. It is the array with which all other
attributes are compared: if they are of the same shape, they get saved as
columns of the table/dataframe, otherwise as metadata.
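The column/metadata split described above can be sketched in pure NumPy. This is an illustrative sketch of the rule, not the stingray serialization code: attributes matching the main array's length become table columns, the rest become metadata.

```python
import numpy as np

# Sketch of the column-vs-metadata rule (illustrative, not stingray's code):
main = np.arange(4)  # e.g. the "time" main array
attrs = {"flux": np.ones(4), "mission": "NICER", "mjdref": 55000.0}

columns = {k: v for k, v in attrs.items()
           if isinstance(v, np.ndarray) and len(v) == len(main)}
meta = {k: v for k, v in attrs.items() if k not in columns}
print(sorted(columns), sorted(meta))  # ['flux'] ['mission', 'mjdref']
```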
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs: list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation: function
Function to be called to propagate the errors.
inplace: bool
If True, overwrite the current time series. Otherwise, return a new one.
Apply a mask to all array attributes of the object.
Parameters:
mask: array of bool
The mask. Has to be of the same length as self.time
Returns:
ts_new: StingrayObject object
The new object with the mask applied if inplace is False, otherwise the
same object.
Other Parameters:
inplace: bool
If True, overwrite the current object. Otherwise, return a new one.
filtered_attrs: list of str or None
Array attributes to be filtered. Defaults to all array attributes if None.
The other array attributes will be set to None. The main array attr is always
included.
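The masking behavior described above amounts to applying the same boolean mask to every array attribute. A conceptual sketch (assumed behavior, not the stingray implementation):

```python
import numpy as np

# Sketch of apply_mask's effect: one boolean mask filters every array
# attribute consistently (illustrative, not stingray's code).
time = np.array([0.0, 1.0, 2.0, 3.0])
flux = np.array([5.0, 6.0, 7.0, 8.0])
flux_err = np.array([0.5, 0.6, 0.7, 0.8])

mask = time < 2.0
filtered = {name: arr[mask] for name, arr in
            {"time": time, "flux": flux, "flux_err": flux_err}.items()}
print(filtered["time"].tolist())  # [0.0, 1.0]
print(filtered["flux"].tolist())  # [5.0, 6.0]
```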
Clean up the list of attributes, only giving out those pointing to data.
List all the attributes that point directly to valid data. This method goes through all the
attributes of the class, eliminating methods, properties, and attributes that are complicated
to serialize, such as other StingrayObject instances or arrays of objects.
This function does not distinguish between array-like data and scalar data.
Returns:
data_attributes: list of str
List of attributes pointing to data that are not methods, properties,
or other StingrayObject instances.
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in table.meta will
form the new meta attributes of the object.
Create a StingrayObject from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
Since pandas does not support n-D data, multi-dimensional arrays can be
specified as <colname>_dimN_M_K etc.
See the documentation of make_1d_arrays_into_nd for details.
The dataset MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
List the names of the internal array attributes of the Stingray Object.
These are array attributes that can be set by properties, and are generally indicated
by an underscore followed by the name of the property that links to them (e.g.
_counts in Lightcurve).
By array attributes, we mean the ones with the same size and shape as
main_array_attr (e.g. time in EventList).
Return a pretty-printed string representation of the object.
This is useful for debugging, and for interactive use.
Other Parameters:
func_to_apply: function
A function that modifies the attributes listed in attrs_to_apply.
It must return the modified attributes and a label to be printed.
If None, no function is applied.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a column named like the main_array_attr.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all attributes such as mission, gti, etc with
no significant loss of information. Other file formats might lose part
of the metadata, so must be used with care.
.. note::
Complex values can be dealt with out-of-the-box in some formats,
like HDF5 or FITS, but not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values should be stored as two columns of real numbers, whose names
are of the form <variablename>.real and <variablename>.imag
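The complex-column convention described above can be sketched directly in NumPy: a complex array is split into two real columns on write and recombined on read.

```python
import numpy as np

# Sketch of the <name>.real / <name>.imag convention (illustrative):
power = np.array([1 + 2j, 3 - 1j])
columns = {"power.real": power.real, "power.imag": power.imag}

# ...and the complex array is recombined when reading back:
restored = columns["power.real"] + 1j * columns["power.imag"]
print(np.array_equal(restored, power))  # True
```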
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are ‘pickle’, ‘hea’, and any Table-supported
format such as ‘hdf5’, ‘ascii.ecsv’, etc.
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs: list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation: function
Function to be called to propagate the errors.
inplace: bool
If True, overwrite the current time series. Otherwise, return a new one.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the meta dictionary.
Other Parameters:
no_longdouble: bool
If True, reduce the precision of longdouble arrays to double precision.
This needs to be done in some cases, e.g. when the table is to be saved
in an architecture not supporting extended precision (e.g. ARM), but can
also be useful when an extended precision is not needed.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Since pandas does not support n-D data, multi-dimensional arrays are
converted into columns before the conversion, with names <colname>_dimN_M_K etc.
See documentation of make_nd_into_arrays for details.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
.. note::
Complex values can be dealt with out-of-the-box in some formats,
like HDF5 or FITS, but not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values will be stored as two columns of real numbers, whose names
are of the form <variablename>.real and <variablename>.imag
Parameters:
filename: str
Name and path of the file to save the object list to.
fmt: str
The file format to store the data in.
Available options are pickle, hdf5, ascii, fits
All time series-like data classes inherit from stingray.StingrayTimeseries, which
implements most of the common functionality. The main data array is stored in the time
attribute.
Good Time Intervals (GTIs) are stored in the gti attribute, which is a list of 2-tuples or 2-lists
containing the start and stop times of each GTI. The gti attribute is not an array attribute, since
it is not related 1-to-1 to the main array, even if in some cases it might happen to have the same number
of elements as the main array. It is by default added to the not_array_attr attribute.
This can be events, binned light curves, unevenly sampled light curves, etc. The only
requirement is that the data (which can be any quantity, related or not to an electromagnetic
measurement) are associated with a time measurement.
We make a distinction between the array attributes, which have the same length as the
time array, and the meta attributes, which can be scalars or arrays of different
size. The array attributes can be multidimensional (e.g. a spectrum for each time bin),
but their first dimension (array.shape[0]) must have the same length as the time array.
Array attributes are singled out automatically depending on their shape. All filtering
operations (e.g. apply_gtis, rebin, etc.) are applied to array attributes only.
For this reason, it is advisable to mark any attribute that should not be
treated as an array attribute by adding it to the not_array_attr list.
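The shape rule described above can be sketched as a simple predicate. This is illustrative, not stingray's actual detection code: an attribute qualifies as an array attribute when its first dimension matches the length of the time array, even if it is multi-dimensional.

```python
import numpy as np

# Sketch of the first-dimension rule (illustrative, not stingray's code):
time = np.arange(5)          # 5 time bins
spectra = np.ones((5, 16))   # one 16-channel spectrum per time bin
scalar_meta = 42.0           # wrong shape -> treated as a meta attribute

def is_array_attr(value, n_times):
    arr = np.asarray(value)
    return arr.ndim > 0 and arr.shape[0] == n_times

print(is_array_attr(spectra, len(time)))      # True
print(is_array_attr(scalar_meta, len(time)))  # False
```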
Parameters:
time: iterable
A list or array of time stamps
Other Parameters:
array_attrs: dict
Array attributes to be set (e.g. {"flux": flux_array, "flux_err": flux_err_array}).
In principle, they could be specified as simple keyword arguments. But this way, we
will run a check on the length of the arrays, and raise an error if they are not of a
shape compatible with the time array.
dt: float
The time resolution of the time series. Can be a scalar or an array attribute (useful
for non-evenly sampled data or events from different instruments)
mjdref: float
The MJD used as a reference for the time array.
gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Good Time Intervals
high_precision: bool
Change the precision of self.time to float128. Useful while dealing with fast pulsars.
timeref: str
The time reference, as recorded in the FITS file (e.g. SOLARSYSTEM)
timesys: str
The time system, as recorded in the FITS file (e.g. TDB)
ephem: str
The JPL ephemeris used to barycenter the data, if any (e.g. DE430)
skip_checks: bool
Skip checks on the time array. Useful when the user is reasonably sure that the
input data are valid.
**other_kw
Used internally. Any other keyword arguments will be set as attributes of the object.
Attributes:
time: numpy.ndarray
The array of time stamps, in seconds from the reference
MJD defined in mjdref
not_array_attr: list
List of attributes that are never to be considered as array attributes. For example, GTIs
are not array attributes.
dt: float
The time resolution of the measurements. Can be a scalar or an array attribute (useful
for non-evenly sampled data or events from different instruments). It can also be 0, which
means that the time series is not evenly sampled and the effects of the time resolution are
considered negligible for the analysis. This is sometimes the case for events from
high-energy telescopes.
mjdref: float
The MJD used as a reference for the time array.
gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Good Time Intervals
high_precision: bool
Change the precision of self.time to float128. Useful while dealing with fast pulsars.
Analyze the light curve with any function, on a GTI-by-GTI base.
Parameters:
func: function
Function accepting a StingrayTimeseries object as its single argument, plus
possible additional keyword arguments, and returning a number or a
tuple - e.g., (result, error), where both result and error are
numbers.
Returns:
start_times: array
Lower time boundaries of all time segments.
stop_times: array
Upper time boundaries of all segments.
result: array of N elements
The result of func for each segment of the light curve.
Other Parameters:
fraction_step: float
By default, segments do not overlap (fraction_step = 1). If fraction_step < 1,
then the start points of consecutive segments are fraction_step*segment_size
apart, and consecutive segments overlap. For example, for fraction_step = 0.5,
the window shifts by one half of segment_size.
kwargs: keyword arguments
Any additional keyword arguments are passed to func.
Analyze segments of the light curve with any function.
Parameters:
func: function
Function accepting a StingrayTimeseries object as its single argument, plus
possible additional keyword arguments, and returning a number or a
tuple - e.g., (result, error), where both result and error are
numbers.
segment_size: float
Length in seconds of the light curve segments. If None, the full GTIs are considered
instead as segments.
Returns:
start_times: array
Lower time boundaries of all time segments.
stop_times: array
Upper time boundaries of all segments.
result: array of N elements
The result of func for each segment of the light curve.
Other Parameters:
fraction_step: float
If the step is not a full segment_size but less (e.g. a moving window),
this indicates the ratio between the step and segment_size (e.g.
0.5 means that the window shifts by half of segment_size).
kwargs: keyword arguments
Any additional keyword arguments are passed to func.
Examples
>>> import numpy as np
>>> time = np.arange(0, 10, 0.1)
>>> counts = np.zeros_like(time) + 10
>>> ts = StingrayTimeseries(time, counts=counts, dt=0.1)
>>> # Define a function that calculates the mean
>>> mean_func = lambda ts: np.mean(ts.counts)
>>> # Calculate the mean in segments of 5 seconds
>>> start, stop, res = ts.analyze_segments(mean_func, 5)
>>> len(res) == 2
True
>>> np.allclose(res, 10)
True
This method concatenates two or more StingrayTimeseries objects along the time
axis. GTIs are recalculated by merging all the GTIs together. GTIs should not overlap at
any point.
Estimate a reasonable segment length for segment-by-segment analysis.
The user has to specify a criterion based on a minimum number of counts (if
the time series has a counts attribute) or a minimum number of time samples.
At least one between min_counts and min_samples must be specified.
In the special case of a time series with dt=0 (event list-like, where each time
stamp corresponds to a single count), the two definitions are equivalent.
Returns:
segment_size: float
The length of the light curve chunks that satisfies the conditions
Other Parameters:
min_counts: int
Minimum number of counts for each chunk. Optional (but min_samples is needed
if this is left unspecified). Only makes sense if the series has a counts attribute and
is evenly sampled.
min_samples: int
Minimum number of time bins. Optional (but min_counts is needed if this is left unspecified).
even_sampling: bool
Force the treatment of the data as evenly sampled or not. If None, the data are
considered evenly sampled if self.dt is larger than zero and the median
separation between subsequent times is within 1% of self.dt.
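For an evenly sampled series, the criterion above reduces to simple arithmetic. A back-of-the-envelope sketch (an assumption-based illustration, not the actual estimate_segment_size code), using illustrative numbers:

```python
# Sketch of the segment-size criterion for an evenly sampled series:
# the segment must contain at least min_counts counts AND at least
# min_samples bins (numbers here are illustrative).
dt = 0.1            # time resolution (s)
mean_rate = 50.0    # counts per second
min_counts = 100
min_samples = 20

seg_for_counts = min_counts / mean_rate   # 2.0 s to accumulate 100 counts
seg_for_samples = min_samples * dt        # 2.0 s to contain 20 bins
segment_size = max(seg_for_counts, seg_for_samples)
print(segment_size)  # 2.0
```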
This method is only appropriate for very short bad time intervals. The simulated data
are basically white noise, so they can alter the statistical properties of
variable data. For very short gaps in the data, the effect of these small
injections of white noise should be negligible. How short depends on the specific case;
users are urged not to use the method as a black box, and to run simulations to measure
its effect. If you have long bad time intervals, you should use more advanced
techniques, not currently available in Stingray for this use case, such as Gaussian
Processes. In particular, please verify that the values of max_length and
buffer_size are adequate for your case.
To fill the gaps in all but the time points (i.e., flux measures, energies), we take the
buffer_size (by default, the larger of 100 and the estimated number of samples in
a max_length-long gap) valid data points closest to the gap and resample them randomly,
following the same empirical statistical distribution. So, if the my_fancy_attr attribute, in
the 100 points of the buffer, takes the value 10 thirty times, 9 ten times, and 11 sixty times,
the simulated data will contain on average 30% of 10, 60% of 11, and 10% of 9.
Times are treated differently depending on whether the time series is evenly
sampled or not. If it is not, the times are simulated from a uniform distribution with the
same count rate found in the buffer. Otherwise, the times just follow the same grid used
inside GTIs. Whether the sampling is treated as even is decided by the even_sampling
parameter. If left to None, the time series is considered evenly sampled if
self.dt is greater than zero and the median separation between subsequent times is
within 1% of the time resolution.
Other Parameters:
max_length: float
Maximum length of a bad time interval to be filled. If None, the criterion is bad
time intervals shorter than 1/100th of the longest good time interval.
attrs_to_randomize: list of str, default None
List of array_attrs to randomize. If None, all array_attrs are randomized.
It should not include time and _mask, which are treated separately.
buffer_size: int, default 100
Number of good data points to use to calculate the mean and variance of the random data
on each side of the bad time interval
even_sampling: bool, default None
Force the treatment of the data as evenly sampled or not. If None, the data are
considered evenly sampled if self.dt is larger than zero and the median
separation between subsequent times is within 1% of self.dt.
seed: int, default None
Random seed to use for the simulation. If None, a random seed is generated.
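The buffer-resampling strategy described above (drawing replacement values from the empirical distribution of the nearest valid points) can be sketched with np.random.choice. This is an illustration of the idea, not stingray's implementation:

```python
import numpy as np

# Resample a gap from the empirical distribution of the buffer
# (illustrative sketch of the gap-filling idea).
rng = np.random.default_rng(seed=42)
buffer = np.array([10] * 30 + [9] * 10 + [11] * 60)  # 100 points near the gap

n_missing = 10000
simulated = rng.choice(buffer, size=n_missing, replace=True)

# The simulated values follow the buffer's empirical distribution:
frac_10 = np.mean(simulated == 10)
print(round(frac_10, 1))  # close to 0.3
```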
The timeseries has to define at least a column called time;
the remaining columns will form the array attributes of the
new time series, while the attributes in table.meta will
form the new meta attributes of the time series.
Standard attributes such as pi and energy remain None if they are None
in both. Otherwise, np.nan is used as a default value for the missing values.
Arbitrary array attributes are created and joined using the same convention.
Multiple checks are done on the joined time series. If the time array of a series
being joined is empty, that series is ignored. If the time resolutions are different, the final
time series will have the coarser time resolution. If the MJDREF values are different, the time
reference will be changed to that of the first time series.
Note: join is not equivalent to concatenate. concatenate is used to join
multiple non-overlapping time series along the time axis, while join is more
general, and can be used to join multiple time series with different strategies (see
parameter strategy below).
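The default-filling rule described above (attributes present in only one series are padded with np.nan) can be sketched in pure NumPy. This is a conceptual illustration, not the join implementation:

```python
import numpy as np

# Sketch of the nan-filling convention when joining two series
# (illustrative): series 2 lacks the "energy" attribute.
time1, energy1 = np.array([0.0, 1.0]), np.array([3.0, 4.0])
time2 = np.array([2.0, 3.0])  # no "energy" attribute here

joined_time = np.concatenate([time1, time2])
joined_energy = np.concatenate([energy1, np.full(time2.size, np.nan)])
print(joined_energy.tolist())  # [3.0, 4.0, nan, nan]
```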
The other StingrayTimeseries object to be joined with this one.
If other is a list, it is assumed to be a list of StingrayTimeseries
objects, and they are all joined, one by one.
Method to use to merge the GTIs. If “intersection”, the GTIs are merged
using the intersection of the GTIs. If “union”, the GTIs are merged
using the union of the GTIs. If “none”, a single GTI with the minimum and
the maximum time stamps of all GTIs is returned. If “infer”, the strategy
is decided based on the GTIs. If there are no overlaps, “union” is used,
otherwise “intersection” is used. If “append”, the GTIs are simply appended
but they must be mutually exclusive.
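Two of the GTI-merging strategies above can be illustrated on a pair of single intervals. This is a toy sketch (stingray has its own GTI routines for the general, multi-interval case):

```python
import numpy as np

# Toy illustration of two GTI-merging strategies on single intervals
# (the general multi-interval case needs proper interval arithmetic).
gti1 = np.array([[0.0, 10.0]])
gti2 = np.array([[5.0, 15.0]])

# "intersection": keep only times covered by both sets of GTIs
intersection = np.array([[max(gti1[0, 0], gti2[0, 0]),
                          min(gti1[0, 1], gti2[0, 1])]])

# "none": a single GTI from the earliest start to the latest stop
none_strategy = np.array([[min(gti1[0, 0], gti2[0, 0]),
                           max(gti1[0, 1], gti2[0, 1])]])

print(intersection.tolist())   # [[5.0, 10.0]]
print(none_strategy.tolist())  # [[0.0, 15.0]]
```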
Plot the time series object on a graph, with self.time on the x-axis and
self.counts on the y-axis, with self.counts_err optionally shown
as error bars.
Parameters:
attr: str
Attribute to plot.
Other Parameters:
witherrors: boolean, default False
Whether to plot the StingrayTimeseries with error bars or not
labels: iterable, default None
A list or tuple with xlabel and ylabel as strings. E.g.
if the attribute is 'counts', the list of labels
could be ['Time (s)', 'Counts (s^-1)']
ax: matplotlib.pyplot.axis object
Axis to be used for plotting. Defaults to creating a new one.
axis_limits: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the ``matplotlib.pyplot.axis()`` method.
title: str, default None
The title of the plot.
marker: str, default '-'
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save: boolean, optional, default False
If True, save the figure with the specified filename.
filename: str
File name of the image to save. Depends on the boolean save.
plot_btis: bool
Plot the bad time intervals as red areas on the plot
Rebin the time series to a new time resolution. While the new
resolution need not be an integer multiple of the previous time
resolution, be aware that if it is not, the last bin will be cut
off by the fraction left over by the integer division.
Parameters:
dt_new: float
The new time resolution of the time series. Must be larger than
the time resolution of the old time series!
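The rebinning arithmetic described above can be sketched for the simplest case of an integer bin ratio. This is an illustration of the leftover-bin behavior, not the stingray rebin implementation (which also handles non-integer ratios by fractional overlap):

```python
import numpy as np

# Sketch of rebinning with an integer bin ratio (illustrative): the
# leftover bins that do not fill a whole new bin are dropped.
dt, dt_new = 1.0, 2.0
counts = np.array([10, 12, 9, 11, 8])   # 5 bins; the last odd bin is cut off

n = int(dt_new // dt)                   # 2 old bins per new bin
usable = (len(counts) // n) * n         # 4 bins survive, 1 is dropped
rebinned = counts[:usable].reshape(-1, n).sum(axis=1)
print(rebinned.tolist())  # [22, 20]
```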
A StingrayTimeseries can be sorted in either increasing or decreasing order
using this method. The time array gets sorted and the counts array is
changed accordingly.
Parameters:
reverse: boolean, default False
If True, the object is sorted in reverse order.
inplace: bool
If True, overwrite the current time series. Otherwise, return a new one.
Split the current StingrayTimeseries object into a list of
StingrayTimeseries objects, one for each continuous GTI segment
as defined in the gti attribute.
Parameters:
min_points: int, default 1
The minimum number of data points in each time series. Light
curves with fewer data points will be ignored.
This method takes a start and a stop point (either as indices,
or as times in the same unit as those in the time attribute), and truncates
all bins before start and after stop, then returns a new
StingrayTimeseries object with the truncated time series.
Parameters:
start: int, default 0
Index (or time stamp) of the starting point of the truncation. If no value is set
for the start point, then all points from the first element in the time array
are taken into account.
stop: int, default None
Index (or time stamp) of the ending point (exclusive) of the truncation. If no
value of stop is set, then points up to and including the last point in
the counts array are taken into account.
method: {index | time}, optional, default index
Type of the start and stop values. If set to index, then
the values are treated as indices of the counts array; if
set to time, the values are treated as actual time values.
>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [10, 20, 30, 40, 50, 60, 70, 80, 90]
>>> ts = StingrayTimeseries(time, array_attrs={"counts": count}, dt=1)
>>> ts_new = ts.truncate(start=2, stop=8)
>>> assert np.allclose(ts_new.counts, [30, 40, 50, 60, 70, 80])
>>> assert np.allclose(ts_new.time, [3, 4, 5, 6, 7, 8])
>>> # Truncation can also be done by time values
>>> ts_new = ts.truncate(start=6, method='time')
>>> assert np.allclose(ts_new.time, [6, 7, 8, 9])
>>> assert np.allclose(ts_new.counts, [60, 70, 80, 90])
Make a light curve object from an array of time stamps and an
array of counts.
Parameters:
time: iterable, :class:`astropy.time.Time`, or :class:`astropy.units.Quantity` object
A list or array of time stamps for a light curve. Must be a type that
can be cast to :class:`np.array` or :class:`List` of floats, or that
has a value attribute that does (e.g. a
:class:`astropy.units.Quantity` or :class:`astropy.time.Time` object).
counts: iterable, optional, default ``None``
A list or array of the counts in each bin corresponding to the
bins defined in time (note: use input_counts=False to
input the count rate, i.e. counts/second, otherwise use
counts/bin).
err: iterable, optional, default ``None``
A list or array of the uncertainties in each bin corresponding to
the bins defined in time (note: use input_counts=False to
input the count rate, i.e. counts/second, otherwise use
counts/bin). If None, we assume the data are Poisson-distributed
and calculate the error from the average of the lower and upper
1-sigma confidence intervals for the Poisson distribution with
mean equal to counts.
input_counts: bool, optional, default True
If True, the code assumes that the input data in counts
is in units of counts/bin. If False, it assumes the data
in counts is in counts/second.
gti: 2-d float array, default ``None``
[[gti0_0,gti0_1],[gti1_0,gti1_1],...]
Good Time Intervals. They are not applied to the data by default.
They will be used by other methods to have an indication of the
“safe” time intervals to use during analysis.
err_dist: str, optional, default ``None``
Statistical distribution used to calculate the
uncertainties and other statistical values appropriately.
The default makes no assumptions and keeps errors equal to zero.
bg_counts: iterable, :class:`numpy.array` or :class:`List` of floats, optional, default ``None``
A list or array of background counts detected in the background extraction region
in each bin corresponding to the bins defined in time.
bg_ratio: iterable, :class:`numpy.array` or :class:`List` of floats, optional, default ``None``
A list or array of the source region area to background region area ratio in each bin. These are
the factors by which the bg_counts should be scaled to estimate background counts within the
source aperture.
frac_exp: iterable, :class:`numpy.array` or :class:`List` of floats, optional, default ``None``
A list or array of fractional exposures in each bin.
mjdref: float
MJD reference (useful in most high-energy mission data)
dt: float or array of floats. Default median(diff(time))
Time resolution of the light curve. Can be an array of the same dimension
as time specifying width of each bin.
skip_checks: bool
If True, the user specifies that data are already sorted and contain no
infinite or nan points. Use at your own risk
low_memory: bool
If True, all the lazily evaluated attributes (e.g., countrate and
countrate_err if input_counts is True) will _not_ be stored in memory,
but calculated every time they are requested.
mission: str
Mission that recorded the data (e.g. NICER)
instr: str
Instrument onboard the mission
header: str
The full header of the original FITS file, if relevant
**other_kw
Used internally. Any other keyword arguments will be ignored
Attributes:
time: numpy.ndarray
The array of midpoints of time bins.
bin_lo: numpy.ndarray
The array of lower time stamp of time bins.
bin_hi: numpy.ndarray
The array of higher time stamp of time bins.
counts: numpy.ndarray
The counts per bin corresponding to the bins in time.
counts_err: numpy.ndarray
The uncertainties corresponding to counts
bg_counts: numpy.ndarray
The background counts corresponding to the bins in time.
bg_ratio: numpy.ndarray
The ratio of source region area to background region area corresponding to each bin.
frac_exp: numpy.ndarray
The fractional exposures in each bin.
countrate: numpy.ndarray
The counts per second in each of the bins defined in time.
countrate_err: numpy.ndarray
The uncertainties corresponding to countrate
meanrate: float
The mean count rate of the light curve.
meancounts: float
The mean counts of the light curve.
n: int
The number of data points in the light curve.
dt: float or array of floats
The time resolution of the light curve.
mjdref: float
MJD reference date (tstart / 86400 gives the date in MJD at the
start of the observation)
tseg: float
The total duration of the light curve.
tstart: float
The start time of the light curve.
gti: 2-d float array
[[gti0_0,gti0_1],[gti1_0,gti1_1],...]
Good Time Intervals. They indicate the “safe” time intervals
to be used during the analysis of the light curve.
err_dist: string
Statistic of the Lightcurve, it is used to calculate the
uncertainties and other statistical values appropriately.
It propagates to Spectrum classes.
mission: str
Mission that recorded the data (e.g. NICER)
instr: str
Instrument onboard the mission
detector_id: iterable
The detector that recorded each photon, if relevant (e.g. XMM, Chandra)
header: str
The full header of the original FITS file, if relevant
Analyze segments of the light curve with any function.
Deprecated since version 2.0: Use Lightcurve.analyze_segments(func, segment_size) instead.
Parameters:
segment_size: float
Length in seconds of the light curve segments
func: function
Function accepting a Lightcurve object as its single argument, plus
possible additional keyword arguments, and returning a number or a
tuple - e.g., (result, error), where both result and error are
numbers.
Returns:
start_times: array
Lower time boundaries of all time segments.
stop_times: array
Upper time boundaries of all segments.
result: array of N elements
The result of func for each segment of the light curve.
Other Parameters:
fraction_step: float
By default, segments do not overlap (fraction_step = 1). If fraction_step < 1,
then the start points of consecutive segments are fraction_step*segment_size
apart, and consecutive segments overlap. For example, for fraction_step = 0.5,
the window shifts by one half of segment_size.
kwargs: keyword arguments
Any additional keyword arguments are passed to func.
Apply GTIs to a light curve. Filters the time, counts,
countrate, counts_err and countrate_err arrays for all bins
that fall into Good Time Intervals and recalculates mean countrate
and the number of bins.
Parameters:
inplace: bool
If True, overwrite the current light curve. Otherwise, return a new one.
Calculate the baseline of the light curve, accounting for GTIs.
Parameters:
lam: float
The "smoothness" parameter. Larger values make the baseline stiffer.
Typically 1e2 < lam < 1e9
p: float
The "asymmetry" parameter. Smaller values make the baseline more
"horizontal". Typically 0.001 < p < 0.1, but not necessarily.
Returns:
baseline: numpy.ndarray
An array with the baseline of the light curve
Other Parameters:
offset_correction: bool, default False
By default, this method does not align to the running mean of the
light curve, but goes below the light curve. Setting this parameter to
True, an additional step is done to shift the baseline so that it
sits in the middle of the light curve noise distribution.
Finds posterior samples of the Bayesian excess variance (bexvar) for the light curve.
It requires source counts in counts and time intervals for each bin.
If dt is an array, its elements are used as the time intervals
for each bin. If dt is a float, the time intervals are calculated by assuming
all intervals to be equal to dt.
Returns:
lc_bexvar: iterable, :class:`numpy.array` of floats
An array of posterior samples of the Bayesian excess variance (bexvar).
Estimate a reasonable segment length for segment-by-segment analysis.
The user has to specify a criterion based on a minimum number of counts (if
the time series has a counts attribute) or a minimum number of time samples.
At least one between min_counts and min_samples must be specified.
Returns:
segment_size: float
The length of the light curve chunks that satisfies the conditions
Other Parameters:
min_counts: int
Minimum number of counts for each chunk. Optional (but min_samples is needed
if this is left unspecified). Only makes sense if the series has a counts attribute and
is evenly sampled.
min_samples: int
Minimum number of time bins. Optional (but min_counts is needed if this is left unspecified).
even_sampling: bool
Force the treatment of the data as evenly sampled or not. If None, the data are
considered evenly sampled if self.dt is larger than zero and the median
separation between subsequent times is within 1% of self.dt.
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in table.meta will
form the new meta attributes of the object.
The timeseries has to define at least a column called time,
the rest of columns will form the array attributes of the
new time series, while the attributes in table.meta will
form the new meta attributes of the time series.
The new Lightcurve object will contain time stamps from both the
objects. The counts and countrate attributes in the resulting object
will contain the union of the non-overlapping parts of the two individual objects,
or the average in case of overlapping time arrays of both Lightcurve objects.
Good Time Intervals are also joined.
Note: Ideally, the time arrays of the two light curves should not overlap.
Make a light curve out of photon arrival times, with a given time resolution dt.
Note that dt should be larger than the native time resolution of the instrument
that has taken the data.
Parameters:
toa: iterable
list of photon arrival times
dt: float
time resolution of the light curve (the bin width)
tseg: float, optional, default ``None``
The total duration of the light curve.
If this is None, the total duration will be the interval between
either the first and the last GTI boundary or, if gti is not set,
the first and the last photon arrival time in toa.
Note: If tseg is not divisible by dt (i.e. if tseg/dt is
not an integer number), then the last fractional bin will be
dropped!
tstart: float, optional, default ``None``
The start time of the light curve.
If this is None, either the first gti boundary or, if not available,
the arrival time of the first photon will be used
as the start time of the light curve.
gti: 2-d float array
[[gti0_0,gti0_1],[gti1_0,gti1_1],...]
Good Time Intervals
use_hist: bool
Use np.histogram instead of np.bincount. Might be advantageous
for very short datasets.
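The binning logic described above can be sketched with np.histogram alone; lightcurve_from_toa is a hypothetical stand-in that ignores GTIs:

```python
import numpy as np

def lightcurve_from_toa(toa, dt, tstart=None, tseg=None):
    """Bin photon arrival times into counts per bin of width dt.

    A bare-bones sketch of the behavior described above; the fractional
    last bin is dropped, as in the note on tseg.
    """
    toa = np.asarray(toa)
    if tstart is None:
        tstart = toa.min()
    if tseg is None:
        tseg = toa.max() - tstart
    nbins = int(tseg / dt)                      # drops the fractional last bin
    edges = tstart + np.arange(nbins + 1) * dt
    counts, _ = np.histogram(toa, bins=edges)
    times = edges[:-1] + dt / 2                 # bin centers
    return times, counts
```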
Plot the light curve object on a graph self.time on x-axis and
self.counts on y-axis with self.counts_err optionally
as error bars.
Parameters:
witherrors: boolean, default False
Whether to plot the Lightcurve with errorbars or not
labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis_limits: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the ``matplotlib.pyplot.axis()`` method.
axis: list, tuple, string, default None
Deprecated in favor of axis_limits, same functionality.
title: str, default None
The title of the plot.
marker: str, default ‘-’
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save: boolean, optional, default False
If True, save the figure with specified filename.
filename: str
File name of the image to save. Depends on the boolean save.
ax: matplotlib.pyplot.axis object
Axis to be used for plotting. Defaults to creating a new one.
plot_btis: bool
Plot the bad time intervals as red areas on the plot
hea : FITS Light curves from HEASARC-supported missions.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a time column and a counts or countrate column.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all lightcurve attributes such as dt, gti, etc with
no significant loss of information. Other file formats might lose part
of the metadata, so must be used with care.
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are ‘pickle’, ‘hea’, and any Table-supported
format such as ‘hdf5’, ‘ascii.ecsv’, etc.
err_dist: str
Default error distribution if not specified in the file (e.g. for
ASCII files). The default is ‘gauss’ just because it is likely
that people using ASCII light curves will want to specify Gaussian
error bars, if any.
Rebin the light curve to a new time resolution. While the new
resolution need not be an integer multiple of the previous time
resolution, be aware that if it is not, the last bin will be cut
off by the fraction left over by the integer division.
Parameters:
dt_new: float
The new time resolution of the light curve. Must be larger than
the time resolution of the old light curve!
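For the common case where the new resolution is an integer multiple of the old one, rebinning amounts to summing counts over groups of bins. A sketch under that assumption (rebin_lightcurve is a hypothetical helper, not the method itself):

```python
import numpy as np

def rebin_lightcurve(time, counts, dt_old, dt_new):
    """Rebin by summing counts over groups of old bins (integer-factor case).

    Leftover bins at the end are dropped, as noted above.
    """
    step = int(dt_new / dt_old)                 # old bins per new bin
    nbins_new = len(counts) // step             # leftover bins are discarded
    counts = np.asarray(counts)[: nbins_new * step]
    time = np.asarray(time)[: nbins_new * step]
    counts_new = counts.reshape(nbins_new, step).sum(axis=1)
    time_new = time.reshape(nbins_new, step).mean(axis=1)
    return time_new, counts_new
```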
A Lightcurve can be sorted in either increasing or decreasing order
using this method. The time array gets sorted and the counts array is
changed accordingly.
Parameters:
reverse: boolean, default False
If True then the object is sorted in reverse order.
inplace: bool
If True, overwrite the current light curve. Otherwise, return a new one.
Sort a Lightcurve object in accordance with its counts array.
A Lightcurve can be sorted in either increasing or decreasing order
using this method. The counts array gets sorted and the time array is
changed accordingly.
Parameters:
reverse: boolean, default False
If True then the object is sorted in reverse order.
inplace: bool
If True, overwrite the current light curve. Otherwise, return a new one.
For data with gaps, it can sometimes be useful to be able to split
the light curve into separate, evenly sampled objects along those
data gaps. This method allows to do this: it finds data gaps of a
specified minimum size, and produces a list of new Lightcurve
objects for each contiguous segment.
Parameters:
min_gap: float
The length of a data gap, in the same units as the time attribute
of the Lightcurve object. Any smaller gaps will be ignored, any
larger gaps will be identified and used to split the light curve.
min_points: int, default 1
The minimum number of data points in each light curve. Light
curves with fewer data points will be ignored.
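The splitting rule can be sketched in a few lines of numpy (split_by_gaps is a hypothetical helper returning index slices rather than Lightcurve objects):

```python
import numpy as np

def split_by_gaps(time, min_gap, min_points=1):
    """Slices of contiguous segments, split at gaps larger than min_gap.

    A toy version of the splitting rule described above.
    """
    gap_idx = np.where(np.diff(time) > min_gap)[0]
    starts = np.concatenate([[0], gap_idx + 1])
    stops = np.concatenate([gap_idx + 1, [len(time)]])
    # keep only segments with at least min_points samples
    return [slice(int(s), int(e)) for s, e in zip(starts, stops)
            if e - s >= min_points]
```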
The time array and all the array attributes become columns. The meta attributes become
metadata of the astropy.table.Table object.
Other Parameters:
no_longdouble: bool, default False
If True, the data are converted to double precision before being saved.
This is useful, e.g., for saving to FITS files, which do not support long double precision.
The time array and all the array attributes become columns. The meta attributes become
metadata of the astropy.timeseries.TimeSeries object.
The time array is saved as a TimeDelta object.
Other Parameters:
no_longdouble: bool, default False
If True, the data are converted to double precision before being saved.
This is useful, e.g., for saving to FITS files, which do not support long double precision.
Returns a lightkurve.LightCurve object.
This feature requires Lightkurve to be installed
(e.g. pip install lightkurve). An ImportError will
be raised if this package is not available.
This method takes a start and a stop point (either as indices,
or as times in the same unit as those in the time attribute) and truncates
all bins before start and after stop, then returns a new Lightcurve
object with the truncated light curve.
Parameters:
start: int, default 0
Index (or time stamp) of the starting point of the truncation. If no value is set
for the start point, then all points from the first element in the time array
are taken into account.
stop: int, default None
Index (or time stamp) of the ending point (exclusive) of the truncation. If no
value of stop is set, then points up to and including the last point in
the counts array are taken into account.
method: {index | time}, optional, default index
Type of the start and stop values. If set to index, the
values are treated as indices of the counts array; if
set to time, the values are treated as actual time values.
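The two truncation modes reduce to slicing, with np.searchsorted converting time stamps into indices; a minimal sketch (truncate is a hypothetical stand-in that ignores GTIs):

```python
import numpy as np

def truncate(time, counts, start=0, stop=None, method="index"):
    """Truncate a (time, counts) pair by index or by time value."""
    time = np.asarray(time)
    counts = np.asarray(counts)
    if method == "time":
        # convert time stamps into indices of the time array
        start = np.searchsorted(time, start)
        stop = None if stop is None else np.searchsorted(time, stop)
    return time[start:stop], counts[start:stop]
```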
Basic class for event list data. Event lists generally correspond to individual events (e.g. photons)
recorded by the detector, and their associated properties. For X-ray data where this type commonly occurs,
events are time stamps of when a photon arrived in the detector, and (optionally) the photon energy associated
with the event.
Parameters:
time: iterable
A list or array of time stamps
Other Parameters:
dt: float
The time resolution of the events. Only relevant when using events
to produce light curves with similar bin time.
energy: iterable
A list or array of photon energy values in keV
mjdref: float
The MJD used as a reference for the time array.
ncounts: int
Number of desired data points in event list. Deprecated
gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Good Time Intervals
pi: integer, numpy.ndarray
PI channels
notes: str
Any useful annotations
high_precision: bool
Change the precision of self.time to float128. Useful while dealing with fast pulsars.
mission: str
Mission that recorded the data (e.g. NICER)
instr: str
Instrument onboard the mission
header: str
The full header of the original FITS file, if relevant
detector_id: iterable
The detector that recorded each photon (if the instrument has more than
one, e.g. XMM/EPIC-pn)
timeref: str
The time reference, as recorded in the FITS file (e.g. SOLARSYSTEM)
timesys: str
The time system, as recorded in the FITS file (e.g. TDB)
ephem: str
The JPL ephemeris used to barycenter the data, if any (e.g. DE430)
rmf_file: str, default None
The file name of the RMF file to use for calibration.
skip_checks: bool, default False
Skip checks for the validity of the event list. Use with caution.
**other_kw
Used internally. Any other keyword arguments will be ignored
Attributes:
time: numpy.ndarray
The array of event arrival times, in seconds from the reference
MJD defined in mjdref
energy: numpy.ndarray
The array of photon energy values
ncounts: int
The number of data points in the event list
dt: float
The time resolution of the events. Only relevant when using events
to produce light curves with similar bin time.
mjdref: float
The MJD used as a reference for the time array.
gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Good Time Intervals
pi: integer, numpy.ndarray
PI channels
high_precision: bool
Change the precision of self.time to float128. Useful while dealing with fast pulsars.
mission: str
Mission that recorded the data (e.g. NICER)
instr: str
Instrument onboard the mission
detector_id: iterable
The detector that recorded each photon, if relevant (e.g. XMM, Chandra)
header: str
The full header of the original FITS file, if relevant
If both are empty, an empty EventList is returned.
GTIs are crossed if the event lists are over a common time interval,
and appended otherwise.
Standard attributes such as pi and energy remain None if they are None
in both. Otherwise, np.nan is used as a default value for the EventList where
they were None. Arbitrary attributes (e.g., Stokes parameters in polarimetric data) are
created and joined using the same convention.
Multiple checks are done on the joined event lists. If the time array of the event list
being joined is empty, it is ignored. If the time resolution is different, the final
event list will have the rougher time resolution. If the MJDREF is different, the time
reference will be changed to the one of the first event list. An empty event list will
be ignored.
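The None/np.nan convention for joining a single attribute can be sketched as follows (join_attribute is a hypothetical helper, not part of the EventList API):

```python
import numpy as np

def join_attribute(attr1, n1, attr2, n2):
    """Join one optional per-event attribute of two event lists.

    Follows the convention above: None in both lists stays None; a side
    where the attribute is missing is filled with np.nan.
    """
    if attr1 is None and attr2 is None:
        return None
    a1 = np.full(n1, np.nan) if attr1 is None else np.asarray(attr1, dtype=float)
    a2 = np.full(n2, np.nan) if attr2 is None else np.asarray(attr2, dtype=float)
    return np.concatenate([a1, a2])
```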
The other EventList object which is supposed to be joined with.
If other is a list, it is assumed to be a list of EventList objects
and they are all joined, one by one.
Method to use to merge the GTIs. If “intersection”, the GTIs are merged
using the intersection of the GTIs. If “union”, the GTIs are merged
using the union of the GTIs. If “none”, a single GTI with the minimum and
the maximum time stamps of all GTIs is returned. If “infer”, the strategy
is decided based on the GTIs. If there are no overlaps, “union” is used,
otherwise “intersection” is used. If “append”, the GTIs are simply appended
but they must be mutually exclusive.
hea or ogip : FITS Event files from (well, some) HEASARC-supported missions.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a time column. Other recognized columns are energy and
pi.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all eventlist attributes such as mission, gti, etc with
no significant loss of information. Other file formats might lose part
of the metadata, so must be used with care.
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are ‘pickle’, ‘hea’, and any Table-supported
format such as ‘hdf5’, ‘ascii.ecsv’, etc.
The file name of the RMF file to use for energy calibration. Defaults to
None, which implies no channel->energy conversion at this stage (or a default
calibration applied to selected missions).
kwargs: dict
Any further keyword arguments to be passed to load_events_and_gtis
for reading in event lists in OGIP/HEASOFT format
Assign (simulate) energies to event list from a spectrum.
Parameters:
spectrum: 2-d array or list [energies, spectrum]
Energies versus corresponding fluxes. The 2-d array or list must
have energies along the first dimension and fluxes along the
second one. If the energy array has the same length as the
spectrum, the energies are interpreted as bin centers.
If it is longer by one, they are interpreted as proper bin edges
(similarly to the bins of np.histogram).
Note that for non-uniformly binned spectra, it is advisable to pass
the exact edges.
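The bin-centers case can be illustrated by drawing energies with probabilities proportional to the fluxes (a simplified sketch; simulate_energies is hypothetical, and the actual method also supports bin edges):

```python
import numpy as np

def simulate_energies(n, energies, fluxes, seed=None):
    """Draw n photon energies from a tabulated spectrum (bin-centers case).

    Treats the fluxes as relative probabilities of each energy bin.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(fluxes, dtype=float)
    p = p / p.sum()                               # normalize to probabilities
    return rng.choice(np.asarray(energies, dtype=float), size=n, p=p)
```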
The result will be something similar to a light curve, but with arbitrary
attributes corresponding to a weighted sum of each specified attribute of
the event list.
E.g. if the event list has a q attribute, the final time series will
have a q attribute, which is the sum of all q values in each time bin.
These classes implement commonly used Fourier analysis products, most importantly Crossspectrum and
Powerspectrum, along with the variants for averaged cross/power spectra.
Compute the classical significances for the powers in the power
spectrum, assuming an underlying noise distribution that follows a
chi-square distributions with 2M degrees of freedom, where M is the
number of powers averaged in each bin.
Note that this function will only produce correct results when the
following underlying assumptions are fulfilled:
The power spectrum is Leahy-normalized
There is no source of variability in the data other than the
periodic signal to be determined with this method. This is important!
If there are other sources of (aperiodic) variability in the data, this
method will not produce correct results, but instead produce a large
number of spurious false positive detections!
There are no significant instrumental effects changing the
statistical distribution of the powers (e.g. pile-up or dead time)
By default, the method produces (index, p-value) pairs for all powers in
the power spectrum, where index is the numerical index of the power in
question. If a threshold is set, then only powers with p-values
below that threshold are reported, with their respective indices. If
trial_correction is set to True, then the threshold will be corrected
for the number of trials (frequencies) in the power spectrum before
being used.
Parameters:
threshold: float, optional, default 1
The threshold to be used when reporting p-values of potentially
significant powers. Must be between 0 and 1.
Default is 1 (all p-values will be reported).
trial_correction: bool, optional, default False
A Boolean flag that sets whether the threshold will be corrected
by the number of frequencies before being applied. This decreases
the threshold (p-values need to be lower to count as significant).
Default is False (report all powers), though for any application
where threshold is set to something meaningful, this should also
be applied!
Returns:
pvals: iterable
A list of (index, p-value) tuples for all powers that have p-values
lower than the threshold specified in threshold.
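For Leahy-normalized powers the p-value comes from the chi-squared survival function, which for integer M has a closed form needing only the standard library. The leahy_pvalue helper is illustrative, not Stingray's API:

```python
import math

def leahy_pvalue(power, m=1):
    """Single-trial p-value of an averaged, Leahy-normalized power.

    The sum of m Leahy powers is chi-squared distributed with 2m d.o.f.,
    so the p-value is the survival function of chi^2_{2m} at m * power,
    which for integer m reduces to the closed form below.
    """
    x = m * power / 2.0
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(m))
```

For m = 1 this reduces to the familiar exp(-P/2).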
Coherence is defined in Vaughan and Nowak, 1996 [1].
It is a Fourier frequency dependent measure of the linear correlation
between time series measured simultaneously in two energy channels.
This correction is based on the formula given in Zhang et al. 2015, assuming
a constant dead time for all events.
For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad
Detected background count rate. This is important to estimate when deadtime is given by the
combination of the source counts and background counts (e.g. in an imaging X-ray detector).
paralyzable: bool, default False
If True, the dead time correction is done assuming a paralyzable
dead time. If False, the correction is done assuming a non-paralyzable
(more common) dead time.
limit_k: int, default 200
Limit to this value the number of terms in the inner loops of
calculations. Check the plots returned by the check_B and
check_A functions to test that this number is adequate.
n_approx: int, default None
Number of bins to calculate the model power spectrum. If None, it will use the size of
the input frequency array. Relatively simple models (e.g., low count rates compared to
dead time) can use a smaller number of bins to speed up the calculation, and the final
power values will be interpolated.
Calculate AveragedCrossspectrum from two event lists
Parameters:
events1: stingray.EventList
Events from channel 1
events2: stingray.EventList
Events from channel 2
dt: float
The time resolution of the intermediate light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size: float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Light curves from channel 2. If arrays, use them as counts
dt: float
The time resolution of the light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size: float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
save_all: bool, default False
If True, save the cross spectrum of each segment in the cs_all
attribute of the output Crossspectrum object.
If True, save the cross spectrum of each segment in the cs_all
attribute of the output Crossspectrum object.
segment_size: float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
What attribute of the time series will be used as error bar.
segment_size: float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Calculate AveragedCrossspectrum from two arrays of event times.
Parameters:
times1: np.array
Event arrival times of channel 1
times2: np.array
Event arrival times of channel 2
dt: float
The time resolution of the intermediate light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size: float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for AveragedCrossspectrum.
gti: [[gti0, gti1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
If norm is not a string, raise a TypeError
>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
…
TypeError: norm must be a string…
If norm is not one of the valid norms, raise a ValueError
>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
…
ValueError: norm must be ‘frac’…
If power_type is not one of the valid norms, raise a ValueError
>>> Crossspectrum.initial_checks(c, power_type="blabla")
Traceback (most recent call last):
…
ValueError: power_type not recognized!
If the user passes only one light curve, raise a ValueError
>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...
If the user passes an event list without dt, raise a ValueError
>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...
Calculate the Fourier phase lag of the cross spectrum.
This is defined as the argument of the complex cross spectrum, and gives
the delay at all frequencies, in cycles, of one input light curve with respect
to the other.
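The definition above reduces to taking the complex argument; dividing by 2πν converts the phase lag to a time lag. A minimal sketch (phase_and_time_lag is a hypothetical helper, not the method itself):

```python
import numpy as np

def phase_and_time_lag(freq, cross_power):
    """Phase lag (argument of the complex cross spectrum) and derived time lag.

    The time lag at each frequency nu is phase / (2 * pi * nu).
    """
    phase = np.angle(cross_power)          # radians, in (-pi, pi]
    return phase, phase / (2 * np.pi * freq)
```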
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
Parameters:
labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the ``matplotlib.pyplot.axis()`` method.
title: str, default None
The title of the plot.
marker: str, default ‘-’
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save: boolean, optional, default False
If True, save the figure with specified filename.
filename: str
File name of the image to save. Depends on the boolean save.
ax: matplotlib.Axes object
An axes object to fill with the cross correlation plot.
Rebin the cross spectrum to a new frequency resolution df.
Parameters:
df: float
The new frequency resolution
Returns:
bin_cs: Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an object of class
AveragedPowerspectrum, too.
Other Parameters:
f: float
the rebin factor. If specified, it substitutes df with f*self.df
Logarithmic rebin of the periodogram.
The new frequency depends on the previous frequency
modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``
parameter that steers the frequency resolution
Returns:
new_spec: Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an object of class
AveragedPowerspectrum, too.
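The geometric growth rule for the bin widths can be sketched directly from the formula above. The log_rebin_edges helper is hypothetical; df0 (the width of the first bin, typically the original frequency resolution) is an assumption of this sketch:

```python
import numpy as np

def log_rebin_edges(f0, fmax, df0, f=0.01):
    """Frequency bin edges where each bin is (1 + f) times wider than the last.

    Illustrates d(nu)_j = d(nu)_{j-1} * (1 + f).
    """
    edges = [f0]
    df = df0
    while edges[-1] < fmax:
        edges.append(edges[-1] + df)
        df *= 1 + f                       # each new bin grows by the factor (1 + f)
    return np.array(edges)
```

With f = 0 this degenerates to uniform binning, which is a quick sanity check.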
Make a cross spectrum from a (binned) light curve.
You can also make an empty Crossspectrum object to populate with your
own Fourier-transformed data (this can sometimes be useful when making
binned power spectra). Stingray uses the scipy.fft standards for the sign
of the Nyquist frequency.
Parameters:
data1: :class:`stingray.Lightcurve` or :class:`stingray.events.EventList`, optional, default ``None``
The dataset for the first channel/band of interest.
data2: :class:`stingray.Lightcurve` or :class:`stingray.events.EventList`, optional, default ``None``
The dataset for the second channel/band of interest.
norm: str, optional, default ``none``
The normalization of the (real part of the) cross spectrum.
power_type: string, optional, default ``real``
Parameter to choose among complete, real part and magnitude of the cross spectrum.
fullspec: boolean, optional, default ``False``
If False, keep only the positive frequencies; if True, keep all of them.
Other Parameters:
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the input
Lightcurve GTIs! If you’re getting errors regarding your GTIs, don’t
use this and only give GTIs to the Lightcurve objects before making
the cross spectrum.
lc1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects
dt: float
The time resolution of the light curve. Only needed when constructing
light curves in the case where data1, data2 are
EventList objects
skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your
inputs!)
Attributes:
freq: numpy.ndarray
The array of mid-bin frequencies that the Fourier transform samples
power: numpy.ndarray
The array of cross spectra (complex numbers)
power_err: numpy.ndarray
The uncertainties of power.
An approximation for each bin is given by power_err = power / sqrt(m),
where m is the number of powers averaged in each bin (by frequency
binning, or by averaging more than one spectrum). Note that for a single
realization (m=1) the error is equal to the power.
df: float
The frequency resolution
m: int
The number of averaged cross-spectra amplitudes in each bin.
n: int
The number of data points/time bins in one segment of the light
curves.
k: array of int
The rebinning scheme, if the object has been rebinned; otherwise it is set to 1.
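The core quantity behind these attributes can be sketched with np.fft alone. This is only a sketch: normalizations and segment averaging are left out, and which series gets conjugated is a sign convention, so treat the helper name and conventions as assumptions:

```python
import numpy as np

def cross_spectrum(counts1, counts2, dt):
    """Unnormalized cross spectrum of two evenly sampled light curves."""
    n = len(counts1)
    freq = np.fft.rfftfreq(n, d=dt)[1:]      # positive frequencies only
    ft1 = np.fft.rfft(counts1)[1:]
    ft2 = np.fft.rfft(counts2)[1:]
    return freq, np.conj(ft1) * ft2          # equals |FT|^2 when inputs match
```

Feeding the same series twice gives real, non-negative powers, i.e. a periodogram.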
Initialize the class, trying to understand the input types.
The input arguments are the same as __init__(). Based on the type
of data, this method will call the appropriate
powerspectrum_from_XXXX function, and initialize self with
the correct attributes.
Helper method to codify an operation of one time series with another (e.g. add, subtract).
Takes into account the GTIs, and returns a new StingrayTimeseries object.
An operation between the StingrayTimeseries object calling this method, and
other, operating on all the specified array attributes.
Returns:
ts_new: StingrayTimeseries object
The new time series calculated in operation
Other Parameters:
operated_attrs: list of str or None
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs: list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation: function
The function used for error propagation. Defaults to the sum of squares.
The function used for error propagation. Defaults to the sum of squares.
Compute the error on the fractional rms amplitude using error
propagation.
Note: this uses the actual measured powers, which is not
strictly correct. We should be using the underlying power spectrum,
but in the absence of an estimate of that, this will have to do.
\[r = \sqrt{P}\]
\[\delta r = \frac{1}{2\sqrt{P}} \delta P\]
Parameters:
powers: iterable
The list of powers used to compute the fractional rms amplitude.
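The two formulas above translate directly into code; rms_error is a hypothetical helper illustrating the propagation, not Stingray's implementation:

```python
import math

def rms_error(power_sum, power_sum_err):
    """Propagate the error on the (summed) power P to the rms amplitude.

    Implements r = sqrt(P) and delta_r = delta_P / (2 sqrt(P)).
    """
    r = math.sqrt(power_sum)
    return r, power_sum_err / (2.0 * r)
```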
operated_attrs: list of str or None
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs: list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation: function
Function to be called to propagate the errors
inplace: bool
If True, overwrite the current time series. Otherwise, return a new one.
Apply a mask to all array attributes of the object
Parameters:
mask : array of bool
The mask. Has to be of the same length as self.time
Returns:
ts_new : StingrayObject object
The new object with the mask applied if inplace is False, otherwise the
same object.
Other Parameters:
inplace : bool
If True, overwrite the current object. Otherwise, return a new one.
filtered_attrs : list of str or None
Array attributes to be filtered. Defaults to all array attributes if None.
The other array attributes will be set to None. The main array attr is always
included.
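The filtering behavior described above can be mimicked with a toy class (a minimal sketch; `SimpleSeries` is hypothetical and not the stingray implementation, which additionally handles filtered_attrs and many more attributes):

```python
import copy
import numpy as np

class SimpleSeries:
    """Toy object mimicking the mask behavior described above."""
    array_attrs = ("time", "counts")

    def __init__(self, time, counts):
        self.time = np.asarray(time)
        self.counts = np.asarray(counts)

    def apply_mask(self, mask, inplace=False):
        mask = np.asarray(mask, dtype=bool)
        if mask.size != self.time.size:
            raise ValueError("mask must have the same length as self.time")
        # Either modify in place or work on a copy, as the inplace flag dictates
        new = self if inplace else copy.deepcopy(self)
        for attr in self.array_attrs:
            setattr(new, attr, getattr(self, attr)[mask])
        return new
```

With inplace=False (the default here), the original object is left untouched.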
Compute the classical significances for the powers in the power
spectrum, assuming an underlying noise distribution that follows a
chi-square distribution with 2M degrees of freedom, where M is the
number of powers averaged in each bin.
Note that this function will only produce correct results when the
following underlying assumptions are fulfilled:
The power spectrum is Leahy-normalized
There is no source of variability in the data other than the
periodic signal to be determined with this method. This is
important! If there are other sources of (aperiodic) variability in
the data, this method will not produce correct results, but
instead produce a large number of spurious false positive
detections!
There are no significant instrumental effects changing the
statistical distribution of the powers (e.g. pile-up or dead time)
By default, the method produces (index, p-value) pairs for all powers in
the power spectrum, where index is the numerical index of the power in
question. If a threshold is set, then only powers with p-values
below that threshold are returned, with their respective indices. If
trial_correction is set to True, then the threshold will be
corrected for the number of trials (frequencies) in the power spectrum
before being used.
Parameters:
threshold : float, optional, default 1
The threshold to be used when reporting p-values of potentially
significant powers. Must be between 0 and 1.
Default is 1 (all p-values will be reported).
trial_correction : bool, optional, default False
A Boolean flag that sets whether the threshold will be
corrected by the number of frequencies before being applied. This
decreases the threshold (p-values need to be lower to count as
significant). Default is False (report all powers), though for
any application where threshold is set to something meaningful,
this should also be applied!
Returns:
pvals : iterable
A list of (index, p-value) tuples for all powers that have
p-values lower than the threshold specified in threshold.
Coherence is defined in Vaughan and Nowak, 1996 [3].
It is a Fourier frequency dependent measure of the linear correlation
between time series measured simultaneously in two energy channels.
Compute the fractional rms amplitude in the power spectrum
between two frequencies.
Parameters:
min_freq: float
The lower frequency bound for the calculation.
max_freq: float
The upper frequency bound for the calculation.
Returns:
rms: float
The fractional rms amplitude contained between min_freq and
max_freq.
rms_err: float
The error on the fractional rms amplitude.
Other Parameters:
poisson_noise_level : float, default is None
This is the Poisson noise level of the PDS, in the same
normalization as the PDS. If poisson_noise_level is None,
the Poisson noise is calculated in the ideal case,
e.g. 2. / <countrate> for fractional rms normalisation.
Dead time and other instrumental effects can alter it.
The user can fit the Poisson noise level outside
this function, using the same normalisation as the PDS,
and it will get subtracted from the powers here.
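A stripped-down version of the calculation, for a frac-normalized PDS with evenly spaced frequencies, integrates the noise-subtracted power over the band (a hypothetical helper, with no error propagation):

```python
import numpy as np

def rms_in_band(freq, power, min_freq, max_freq, poisson_noise_level=0.0):
    """Integrate a frac-normalized PDS between two frequencies:
    rms = sqrt(sum((P - P_noise) * df))."""
    freq = np.asarray(freq, dtype=float)
    power = np.asarray(power, dtype=float)
    df = freq[1] - freq[0]  # assumes evenly spaced frequencies
    band = (freq >= min_freq) & (freq <= max_freq)
    variance = np.sum((power[band] - poisson_noise_level) * df)
    return np.sqrt(variance)
```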
Clean up the list of attributes, only giving out those pointing to data.
List all the attributes that point directly to valid data. This method goes through all the
attributes of the class, eliminating methods, properties, and attributes that are complicated
to serialize such as other StingrayObject, or arrays of objects.
This function does not distinguish between array-like data and scalar data.
Returns:
data_attributeslist of str
List of attributes pointing to data that are not methods, properties,
or other StingrayObject instances.
This correction is based on the formula given in Zhang et al. 2015, assuming
a constant dead time for all events.
For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad
Detected background count rate. This is important to estimate when deadtime is given by the
combination of the source counts and background counts (e.g. in an imaging X-ray detector).
paralyzable: bool, default False
If True, the dead time correction is done assuming a paralyzable
dead time. If False, the correction is done assuming a non-paralyzable
(more common) dead time.
limit_k : int, default 200
Limit to this value the number of terms in the inner loops of
calculations. Check the plots returned by the check_B and
check_A functions to test that this number is adequate.
n_approx : int, default None
Number of bins to calculate the model power spectrum. If None, it will use the size of
the input frequency array. Relatively simple models (e.g., low count rates compared to
dead time) can use a smaller number of bins to speed up the calculation, and the final
power values will be interpolated.
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in the table meta dictionary will
form the new meta attributes of the object.
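The round-trip mechanics can be sketched with a plain dictionary standing in for the Table columns and meta (the `FakeObject` class and `from_table_sketch` helper are illustrative, not the library code):

```python
import numpy as np

class FakeObject:
    """Stand-in for a StingrayObject subclass."""
    main_array_attr = "time"

def from_table_sketch(columns, meta):
    """Columns become array attributes; meta entries become meta attributes."""
    if FakeObject.main_array_attr not in columns:
        raise ValueError("table must contain a column named like main_array_attr")
    obj = FakeObject()
    for name, col in columns.items():
        setattr(obj, name, np.asarray(col))
    for name, val in meta.items():
        setattr(obj, name, val)
    return obj
```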
Calculate an average power spectrum from an event list.
Parameters:
events : stingray.EventList
Event list to be analyzed.
dt : float
The time resolution of the intermediate light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
silent : bool, default False
Silence the progress bars.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
The time resolution of the light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
silent : bool, default False
Silence the progress bars.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
The time resolution of the intermediate light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
silent : bool, default False
Silence the progress bars.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
Create a StingrayObject from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
Since pandas does not support n-D data, multi-dimensional arrays can be
specified as <colname>_dimN_M_K etc.
See documentation of make_1d_arrays_into_nd for details.
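One plausible reading of that naming scheme, sketched for the 2-D case (the `split_nd_column` helper is illustrative; see make_1d_arrays_into_nd and make_nd_into_arrays for the actual convention):

```python
import numpy as np

def split_nd_column(name, arr):
    """Flatten a multi-dimensional array into 1-D columns, one per trailing
    index, named '<name>_dimM_K...' (an illustrative encoding)."""
    arr = np.asarray(arr)
    cols = {}
    for idx in np.ndindex(arr.shape[1:]):
        suffix = "_".join(str(i) for i in idx)
        # Each column shares the length of the leading (main) axis
        cols[f"{name}_dim{suffix}"] = arr[(slice(None),) + idx]
    return cols
```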
What attribute of the time series will be used as error bar.
segment_size : float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm : str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is
fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
unnormalized periodogram
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
silent : bool, default False
Silence the progress bars
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
Calculate an average power spectrum from an array of event times.
Parameters:
times : np.array
Event arrival times.
dt : float
The time resolution of the intermediate light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
The dataset MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
If norm is not a string, raise a TypeError
>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
…
TypeError: norm must be a string…
If norm is not one of the valid norms, raise a ValueError
>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
…
ValueError: norm must be ‘frac’…
If power_type is not one of the valid norms, raise a ValueError
>>> Crossspectrum.initial_checks(c, power_type=”blabla”)
Traceback (most recent call last):
…
ValueError: power_type not recognized!
If the user passes only one light curve, raise a ValueError
>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...
If the user passes an event list without dt, raise a ValueError
>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...
List the names of the internal array attributes of the Stingray Object.
These are array attributes that can be set by properties, and are generally indicated
by an underscore followed by the name of the property that links to it (e.g.
_counts in Lightcurve).
By array attributes, we mean the ones with the same size and shape as
main_array_attr (e.g. time in EventList)
The formula used to calculate the upper limit assumes the Leahy
normalization.
If the periodogram is in another normalization, we will internally
convert it to Leahy before calculating the upper limit.
Parameters:
fmin: float
The minimum frequency to search (defaults to the first nonzero bin)
fmax: float
The maximum frequency to search (defaults to the Nyquist frequency)
Returns:
a: float
The modulation amplitude that could produce P>pmeas with 1 - c
probability.
Other Parameters:
c: float
The confidence value for the upper limit (e.g. 0.95 = 95%)
Examples
>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> assert np.isclose(
...     pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99),
...     0.10164,
...     atol=0.0001)
Calculate the fourier phase lag of the cross spectrum.
This is defined as the argument of the complex cross spectrum, and gives
the delay at all frequencies, in cycles, of one input light curve with respect
to the other.
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
Parameters:
labels : iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis : list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the matplotlib.pyplot.axis() method.
title : str, default None
The title of the plot.
marker : str, default '-'
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save : boolean, optional, default False
If True, save the figure with specified filename.
filename : str
File name of the image to save. Depends on the boolean save.
ax : matplotlib.Axes object
An axes object to fill with the cross correlation plot.
Return a pretty-printed string representation of the object.
This is useful for debugging, and for interactive use.
Other Parameters:
func_to_apply : function
A function that modifies the attributes listed in attrs_to_apply.
It must return the modified attributes and a label to be printed.
If None, no function is applied.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a column named like the main_array_attr.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all attributes such as mission, gti, etc with
no significant loss of information. Other file formats might lose part
of the metadata, so must be used with care.
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values should be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
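The column-splitting convention can be sketched as follows (illustrative helpers, not the library implementation):

```python
import numpy as np

def complex_to_real_columns(name, values):
    """Store a complex array as two real columns named
    '<name>.real' and '<name>.imag', as described above."""
    values = np.asarray(values)
    return {f"{name}.real": values.real.copy(),
            f"{name}.imag": values.imag.copy()}

def real_columns_to_complex(cols, name):
    """Reassemble the complex array on read."""
    return cols[f"{name}.real"] + 1j * cols[f"{name}.imag"]
```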
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are 'pickle', 'hea', and any Table-supported
format such as 'hdf5', 'ascii.ecsv', etc.
Logarithmic rebin of the periodogram.
The new frequency depends on the previous frequency
modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``
Parameter that steers the frequency resolution
Returns:
new_spec : Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an AveragedPowerspectrum object.
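The geometric growth of the bin width can be sketched by building the bin edges directly (an illustrative helper, assuming a starting bin width df0):

```python
import numpy as np

def log_rebin_edges(fmin, fmax, df0, f=0.01):
    """Build frequency bin edges where each bin is (1 + f) times wider than
    the previous one, following d nu_j = d nu_{j-1} (1 + f)."""
    edges = [fmin]
    df = df0
    while edges[-1] + df < fmax:
        edges.append(edges[-1] + df)
        df *= 1.0 + f  # widen the next bin geometrically
    edges.append(fmax)
    return np.asarray(edges)
```

With f = 0.5 and df0 = 1, the bins start at widths 1, 1.5, 2.25, and so on.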
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs : list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation : function
Function to be called to propagate the errors
inplace : bool
If True, overwrite the current time series. Otherwise, return a new one.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the meta dictionary.
Other Parameters:
no_longdouble : bool
If True, reduce the precision of longdouble arrays to double precision.
This needs to be done in some cases, e.g. when the table is to be saved
in an architecture not supporting extended precision (e.g. ARM), but can
also be useful when an extended precision is not needed.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Since pandas does not support n-D data, multi-dimensional arrays are
converted into columns before the conversion, with names <colname>_dimN_M_K etc.
See documentation of make_nd_into_arrays for details.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Make a Powerspectrum (also called periodogram) from a (binned)
light curve. Periodograms can be normalized by either Leahy normalization,
fractional rms normalization, absolute rms normalization, or not at all.
You can also make an empty Powerspectrum object to populate with
your own Fourier-transformed data (this can sometimes be useful when making
binned power spectra).
The normalization of the power spectrum to be used. Options are
“leahy”, “frac”, “abs” and “none”, default is “frac”.
Other Parameters:
gti: 2-d float array
[[gti0_0,gti0_1],[gti1_0,gti1_1],...] – Good Time intervals.
This choice overrides the GTIs in the single light curves. Use with
care, especially if these GTIs have overlaps with the input
object GTIs! If you’re getting errors regarding your GTIs, don’t
use this and only give GTIs to the input object before making
the power spectrum.
skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your
inputs!).
Attributes:
norm: {“leahy” | “frac” | “abs” | “none” }
The normalization of the power spectrum.
freq: numpy.ndarray
The array of mid-bin frequencies that the Fourier transform samples.
power: numpy.ndarray
The array of normalized squared absolute values of Fourier
amplitudes.
power_err: numpy.ndarray
The uncertainties of power.
An approximation for each bin given by power_err = power / sqrt(m),
where m is the number of powers averaged in each bin (by frequency
binning, or averaging power spectra of segments of a light curve).
Note that for a single realization (m = 1) the error is equal to the
power.
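The quoted approximation is simple enough to sketch directly (an illustrative helper, not the library routine):

```python
import numpy as np

def approximate_power_err(power, m):
    """The uncertainty approximation quoted above: power_err = power / sqrt(m).
    For m = 1 the error equals the power itself."""
    return np.asarray(power, dtype=float) / np.sqrt(m)
```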
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values will be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Name and path of the file to save the object list to.
fmt: str
The file format to store the data in.
Available options are pickle, hdf5, ascii, fits
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs : list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation : function
Function to be called to propagate the errors
inplace : bool
If True, overwrite the current time series. Otherwise, return a new one.
Apply a mask to all array attributes of the object
Parameters:
mask : array of bool
The mask. Has to be of the same length as self.time
Returns:
ts_new : StingrayObject object
The new object with the mask applied if inplace is False, otherwise the
same object.
Other Parameters:
inplace : bool
If True, overwrite the current object. Otherwise, return a new one.
filtered_attrs : list of str or None
Array attributes to be filtered. Defaults to all array attributes if None.
The other array attributes will be set to None. The main array attr is always
included.
Compute the classical significances for the powers in the power
spectrum, assuming an underlying noise distribution that follows a
chi-square distribution with 2M degrees of freedom, where M is the
number of powers averaged in each bin.
Note that this function will only produce correct results when the
following underlying assumptions are fulfilled:
The power spectrum is Leahy-normalized
There is no source of variability in the data other than the
periodic signal to be determined with this method. This is important!
If there are other sources of (aperiodic) variability in the data, this
method will not produce correct results, but instead produce a large
number of spurious false positive detections!
There are no significant instrumental effects changing the
statistical distribution of the powers (e.g. pile-up or dead time)
By default, the method produces (index, p-value) pairs for all powers in
the power spectrum, where index is the numerical index of the power in
question. If a threshold is set, then only powers with p-values
below that threshold are returned, with their respective indices. If
trial_correction is set to True, then the threshold will be corrected
for the number of trials (frequencies) in the power spectrum before
being used.
Parameters:
threshold : float, optional, default 1
The threshold to be used when reporting p-values of potentially
significant powers. Must be between 0 and 1.
Default is 1 (all p-values will be reported).
trial_correction : bool, optional, default False
A Boolean flag that sets whether the threshold will be corrected
by the number of frequencies before being applied. This decreases
the threshold (p-values need to be lower to count as significant).
Default is False (report all powers), though for any application
where threshold is set to something meaningful, this should also
be applied!
Returns:
pvals : iterable
A list of (index, p-value) tuples for all powers that have p-values
lower than the threshold specified in threshold.
Coherence is defined in Vaughan and Nowak, 1996 [4].
It is a Fourier frequency dependent measure of the linear correlation
between time series measured simultaneously in two energy channels.
Compute an averaged Coherence function of cross spectrum by computing
coherence function of each segment and averaging them. The return type
is a tuple with first element as the coherence function and the second
element as the corresponding uncertainty associated with it.
Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.
Returns:
(coh, uncertainty)tuple of np.ndarray
Tuple comprising the coherence function and uncertainty.
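The underlying frequency-dependent quantity can be sketched as follows (illustrative; `raw_coherence` is not the library routine, and the noise-bias and uncertainty terms from Vaughan & Nowak (1996) are omitted):

```python
import numpy as np

def raw_coherence(cross, p1, p2):
    """Frequency-dependent coherence gamma^2 = |<C>|^2 / (<P1> <P2>).
    The segment averaging of cross, p1, and p2 is assumed to have
    been done already."""
    cross = np.asarray(cross)
    return np.abs(cross) ** 2 / (np.asarray(p1) * np.asarray(p2))
```

For a perfectly correlated pair of signals the coherence is 1 at all frequencies.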
Clean up the list of attributes, only giving out those pointing to data.
List all the attributes that point directly to valid data. This method goes through all the
attributes of the class, eliminating methods, properties, and attributes that are complicated
to serialize such as other StingrayObject, or arrays of objects.
This function does not distinguish between array-like data and scalar data.
Returns:
data_attributeslist of str
List of attributes pointing to data that are not methods, properties,
or other StingrayObject instances.
This correction is based on the formula given in Zhang et al. 2015, assuming
a constant dead time for all events.
For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad
Detected background count rate. This is important to estimate when deadtime is given by the
combination of the source counts and background counts (e.g. in an imaging X-ray detector).
paralyzable: bool, default False
If True, the dead time correction is done assuming a paralyzable
dead time. If False, the correction is done assuming a non-paralyzable
(more common) dead time.
limit_k : int, default 200
Limit to this value the number of terms in the inner loops of
calculations. Check the plots returned by the check_B and
check_A functions to test that this number is adequate.
n_approx : int, default None
Number of bins to calculate the model power spectrum. If None, it will use the size of
the input frequency array. Relatively simple models (e.g., low count rates compared to
dead time) can use a smaller number of bins to speed up the calculation, and the final
power values will be interpolated.
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in the table meta dictionary will
form the new meta attributes of the object.
Calculate an AveragedCrossspectrum from two event lists.
Parameters:
events1 : stingray.EventList
Events from channel 1
events2 : stingray.EventList
Events from channel 2
dt : float
The time resolution of the intermediate light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm : str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is
fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
unnormalized periodogram
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec : bool, default False
Return the full periodogram, including negative frequencies
silent : bool, default False
Silence the progress bars
power_type : str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real',
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Light curves from channel 2. If arrays, use them as counts
dt : float
The time resolution of the light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm : str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is
fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
unnormalized periodogram
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec : bool, default False
Return the full periodogram, including negative frequencies
silent : bool, default False
Silence the progress bars
power_type : str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real',
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
save_all : bool, default False
If True, save the cross spectrum of each segment in the cs_all
attribute of the output Crossspectrum object.
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm : str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is
fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
unnormalized periodogram
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec : bool, default False
Return the full periodogram, including negative frequencies
silent : bool, default False
Silence the progress bars
power_type : str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real',
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Create a StingrayObject from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr.
The remaining columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
Since pandas does not support n-D data, multi-dimensional arrays can be
specified as <colname>_dimN_M_K etc.
See documentation of make_1d_arrays_into_nd for details.
What attribute of the time series will be used as error bar.
segment_size : float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm : str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is
fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
unnormalized periodogram
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec : bool, default False
Return the full periodogram, including negative frequencies
silent : bool, default False
Silence the progress bars
power_type : str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real',
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Calculate AveragedCrossspectrum from two arrays of event times.
Parameters:
times1 : np.array
Event arrival times of channel 1.
times2 : np.array
Event arrival times of channel 2.
dt : float
The time resolution of the intermediate light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for AveragedCrossspectrum.
gti : [[gti0, gti1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you're getting errors regarding your GTIs,
don't use this and only give GTIs to the input objects before
making the cross spectrum.
norm : str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is
fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec : bool, default False
Return the full periodogram, including negative frequencies.
silent : bool, default False
Silence the progress bars.
power_type : str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real',
the real part.
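The intermediate step named above, turning each event list into an evenly sampled light curve at resolution `dt`, can be sketched with numpy. This is an illustration of the concept, not the stingray internals, and the helper name `events_to_counts` is an assumption:

```python
import numpy as np

# Hedged sketch: bin event arrival times into counts per time bin of width
# dt. The resulting sampling sets the Nyquist frequency at 1 / (2 * dt).
def events_to_counts(times, dt, tstart, tstop):
    """Histogram event times into evenly spaced bins of width dt."""
    nbin = int(np.rint((tstop - tstart) / dt))
    edges = tstart + np.arange(nbin + 1) * dt
    counts, _ = np.histogram(times, bins=edges)
    return counts

rng = np.random.default_rng(0)
times1 = np.sort(rng.uniform(0, 10, 1000))
counts1 = events_to_counts(times1, dt=0.1, tstart=0.0, tstop=10.0)
# 100 bins of 0.1 s covering 10 s; all 1000 events land in some bin
```

The two binned light curves would then be Fourier-transformed segment by segment to build the averaged cross spectrum.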
The dataset MUST contain at least a column named like the
main_array_attr.
The rest of columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
For AveragedCrossspectrum, segment_size must be specified:
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
…
ValueError: segment_size must be specified…
And it needs to be finite!
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
…
ValueError: segment_size must be finite!
List the names of the internal array attributes of the Stingray Object.
These are array attributes that can be set by properties, and are generally indicated
by an underscore followed by the name of the property that links to it
(e.g. _counts in Lightcurve).
By array attributes, we mean the ones with the same size and shape as
main_array_attr (e.g. time in EventList)
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
Parameters:
labels : iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis : list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the ``matplotlib.pyplot.axis()`` method.
title : str, default None
The title of the plot.
marker : str, default '-'
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save : boolean, optional, default False
If True, save the figure with the specified filename.
filename : str
File name of the image to save. Depends on the boolean save.
ax : matplotlib.Axes object
An axes object to fill with the cross correlation plot.
Return a pretty-printed string representation of the object.
This is useful for debugging, and for interactive use.
Other Parameters:
func_to_apply : function
A function that modifies the attributes listed in attrs_to_apply.
It must return the modified attributes and a label to be printed.
If None, no function is applied.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a column named like the main_array_attr.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all attributes such as mission, gti, etc. with
no significant loss of information. Other file formats might lose part
of the metadata, so must be used with care.
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, but not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values should be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are ‘pickle’, ‘hea’, and any Table-supported
format such as ‘hdf5’, ‘ascii.ecsv’, etc.
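The round trip for fmt="pickle" can be sketched with the standard library alone. The dict below is a stand-in for a StingrayObject (array attributes plus metadata), not the real class:

```python
import os
import pickle
import tempfile

# Hedged sketch of a pickle round trip: write an object to disk, read it
# back, and confirm that arrays and metadata survive unchanged.
obj = {"freq": [0.1, 0.2, 0.3], "power": [1.0, 2.0, 3.0], "mjdref": 55000.0}

fd, fname = tempfile.mkstemp(suffix=".p")
with os.fdopen(fd, "wb") as f:
    pickle.dump(obj, f)
with open(fname, "rb") as f:
    restored = pickle.load(f)
os.remove(fname)
# restored compares equal to obj: no loss of information
```

Table-backed formats such as 'hdf5' or 'ascii.ecsv' would go through astropy.table.Table instead, with metadata stored in the table's meta dictionary.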
Rebin the cross spectrum to a new frequency resolution df.
Parameters:
df: float
The new frequency resolution
Returns:
bin_cs : Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an object of class
AveragedPowerspectrum, too.
Other Parameters:
f: float
the rebin factor. If specified, it substitutes df with f*self.df
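The effect of rebinning by an integer factor f (so that the new resolution is f * df) can be illustrated with numpy. This is a simplified sketch of the averaging logic, not the stingray code, and the function name `rebin_linear` is an assumption:

```python
import numpy as np

# Hedged sketch: average powers (and frequencies) in groups of f adjacent
# bins, so the rebinned spectrum has frequency resolution f * df.
def rebin_linear(freq, power, f):
    f = int(f)
    n = (len(freq) // f) * f          # drop any incomplete trailing group
    new_freq = freq[:n].reshape(-1, f).mean(axis=1)
    new_power = power[:n].reshape(-1, f).mean(axis=1)
    return new_freq, new_power

freq = np.arange(1, 9) * 0.5          # df = 0.5 Hz
power = np.ones_like(freq)
nf, npow = rebin_linear(freq, power, 4)  # new df = 2 Hz, 2 bins remain
```

In the real method the number of averaged powers m is also updated, which is what tightens the error bars after rebinning.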
Logarithmic rebin of the periodogram.
The new frequency depends on the previous frequency
modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``
parameter that steers the frequency resolution
Returns:
new_spec : Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an object of class
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs : list of str or None
Array attributes to be operated on with error_operation. Defaults to all
array attributes ending with _err.
error_operation : function
Function to be called to propagate the errors.
inplace : bool
If True, overwrite the current time series. Otherwise, return a new one.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the meta dictionary.
Other Parameters:
no_longdouble : bool
If True, reduce the precision of longdouble arrays to double precision.
This needs to be done in some cases, e.g. when the table is to be saved
in an architecture not supporting extended precision (e.g. ARM), but can
also be useful when an extended precision is not needed.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Since pandas does not support n-D data, multi-dimensional arrays are
converted into columns before the conversion, with names <colname>_dimN_M_K etc.
See documentation of make_nd_into_arrays for details.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Make an averaged cross spectrum from a light curve by segmenting two
light curves, Fourier-transforming each segment and then averaging the
resulting cross spectra.
Parameters:
data1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object
A light curve from which to compute the cross spectrum. In some cases,
this would be the light curve of the wavelength/energy/frequency band
of interest.
data2: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object
A second light curve to use in the cross spectrum. In some cases, this
would be the wavelength/energy/frequency reference band to compare the
band of interest with.
segment_size: float
The size of each segment to average. Note that if the total duration of
each Lightcurve object in lc1 or lc2 is not an
integer multiple of segment_size, then any fraction left over
at the end of the time series will be lost, since keeping it would
introduce artifacts.
norm: str
The normalization of the (real part of the) cross spectrum.
Other Parameters:
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
dt : float
The time resolution of the light curve. Only needed when constructing
light curves in the case where data1 or data2 are of :class:`EventList`.
power_type: string, optional, default ``all``
Parameter to choose among complete, real part and magnitude of
the cross spectrum.
silent : bool, default False
Do not show a progress bar when generating an averaged cross spectrum.
Useful for the batch execution of many spectra.
fullspec: bool, optional, default False
If True, return the full array of frequencies, otherwise return just the
positive frequencies.
save_all : bool, default False
Save all intermediate PDSs used for the final average. Use with care.
This is likely to fill up your RAM on medium-sized datasets, and to
slow down the computation when rebinning.
skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your
inputs!)
use_common_mean: bool
Averaged cross spectra are normalized in two possible ways: one is by normalizing
each of the single spectra that get averaged, the other is by normalizing after the
averaging. If use_common_mean is selected, the spectrum will be normalized
after the average.
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Attributes:
freq: numpy.ndarray
The array of mid-bin frequencies that the Fourier transform samples.
power: numpy.ndarray
The array of cross spectra.
power_err: numpy.ndarray
The uncertainties of power.
An approximation for each bin is given by power_err = power / sqrt(m),
where m is the number of powers averaged in each bin (by frequency
binning, or by averaging the power spectra of segments of a light curve).
Note that for a single realization (m=1) the error is equal to the
power.
df: float
The frequency resolution.
m: int
The number of averaged cross spectra.
n: int
The number of time bins per segment of light curve.
nphots1: float
The total number of photons in the first (interest) light curve.
nphots2: float
The total number of photons in the second (reference) light curve.
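The error approximation quoted above, power_err = power / sqrt(m), is simple enough to state directly in numpy. The values below are illustrative:

```python
import numpy as np

# The 1/sqrt(m) error approximation for an averaged spectrum: with m = 16
# averaged cross spectra, the uncertainty shrinks to a quarter of the power.
m = 16
power = np.array([2.0, 4.0, 8.0])
power_err = power / np.sqrt(m)
# for m = 1 the error equals the power itself, as noted above
```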
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, but not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values will be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Name and path of the file to save the object list to.
fmt: str
The file format to store the data in.
Available options are pickle, hdf5, ascii, fits
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs : list of str or None
Array attributes to be operated on with error_operation. Defaults to all
array attributes ending with _err.
error_operation : function
Function to be called to propagate the errors.
inplace : bool
If True, overwrite the current time series. Otherwise, return a new one.
Apply a mask to all array attributes of the object
Parameters:
mask : array of bool
The mask. Has to be of the same length as self.time.
Returns:
ts_new : StingrayObject object
The new object with the mask applied if inplace is False, otherwise the
same object.
Other Parameters:
inplace : bool
If True, overwrite the current object. Otherwise, return a new one.
filtered_attrs : list of str or None
Array attributes to be filtered. Defaults to all array attributes if None.
The other array attributes will be set to None. The main array attr is
always included.
Compute the classical significances for the powers in the power
spectrum, assuming an underlying noise distribution that follows a
chi-square distributions with 2M degrees of freedom, where M is the
number of powers averaged in each bin.
Note that this function will only produce correct results when the
following underlying assumptions are fulfilled:
The power spectrum is Leahy-normalized
There is no source of variability in the data other than the
periodic signal to be determined with this method. This is
important! If there are other sources of (aperiodic) variability in
the data, this method will not produce correct results, but
instead produce a large number of spurious false positive
detections!
There are no significant instrumental effects changing the
statistical distribution of the powers (e.g. pile-up or dead time)
By default, the method produces (index, p-value) pairs for all powers in
the power spectrum, where index is the numerical index of the power in
question. If a threshold is set, then only powers with p-values
below that threshold are reported, with their respective indices. If
trial_correction is set to True, then the threshold will be
corrected for the number of trials (frequencies) in the power spectrum
before being used.
Parameters:
threshold : float, optional, default 1
The threshold to be used when reporting p-values of potentially
significant powers. Must be between 0 and 1.
Default is 1 (all p-values will be reported).
trial_correction : bool, optional, default False
A Boolean flag that sets whether the threshold will be
corrected by the number of frequencies before being applied. This
decreases the threshold (p-values need to be lower to count as
significant). Default is False (report all powers), though for
any application where threshold is set to something meaningful,
this should also be applied!
Returns:
pvals : iterable
A list of (p-value, index) tuples for all powers that have
p-values lower than the threshold specified in threshold.
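The underlying statistic can be sketched directly. This is an illustration of the assumptions listed above, not the stingray implementation: for a Leahy-normalized periodogram with m = 1, noise powers follow a chi-square distribution with 2 degrees of freedom, so the single-trial p-value of a power P is exp(-P/2):

```python
import numpy as np

# Hedged sketch: single-trial p-values for Leahy powers (m = 1), plus a
# Bonferroni-style trial correction over the number of frequencies.
def single_trial_pvalue(power):
    # Survival function of chi-square with 2 dof: P(X > x) = exp(-x / 2)
    return np.exp(-np.asarray(power) / 2.0)

powers = np.array([2.0, 10.0, 40.0])
pvals = single_trial_pvalue(powers)
threshold = 0.01 / len(powers)  # trial-corrected threshold
detections = [(p, i) for i, p in enumerate(pvals) if p < threshold]
# only the P = 40 power survives the corrected threshold
```

For m > 1 averaged powers, the distribution becomes chi-square with 2m degrees of freedom, and a proper survival function (e.g. from scipy.stats.chi2) would replace the exponential.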
Coherence is defined in Vaughan and Nowak, 1996 [5].
It is a Fourier frequency dependent measure of the linear correlation
between time series measured simultaneously in two energy channels.
Compute an averaged Coherence function of cross spectrum by computing
coherence function of each segment and averaging them. The return type
is a tuple with first element as the coherence function and the second
element as the corresponding uncertainty associated with it.
Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.
Returns:
(coh, uncertainty) : tuple of np.ndarray
Tuple comprising the coherence function and its uncertainty.
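The definition from Vaughan and Nowak (1996), coherence = |⟨C⟩|² / (⟨P1⟩⟨P2⟩) averaged over segments, can be sketched with numpy FFTs. Using two identical channels makes the expected answer exactly 1 at every frequency; this is a conceptual illustration, not the stingray code:

```python
import numpy as np

# Hedged sketch: segment-averaged coherence of two channels. With identical
# inputs the cross spectrum is fully correlated, so coherence is exactly 1.
rng = np.random.default_rng(1)
nseg, nbin = 8, 64
cross = np.zeros(nbin // 2, dtype=complex)
p1 = np.zeros(nbin // 2)
p2 = np.zeros(nbin // 2)
for _ in range(nseg):
    x = rng.normal(size=nbin)
    f1 = np.fft.rfft(x)[1:nbin // 2 + 1]   # drop the zero-frequency bin
    f2 = f1.copy()                          # identical second channel
    cross += f2 * f1.conj()
    p1 += np.abs(f1) ** 2
    p2 += np.abs(f2) ** 2
coherence = np.abs(cross) ** 2 / (p1 * p2)
# identical channels: coherence == 1 at every sampled frequency
```

With independent noise in the two channels, the averaged coherence would instead scatter around 1/m at noise-dominated frequencies.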
Compute the fractional rms amplitude in the power spectrum
between two frequencies.
Parameters:
min_freq: float
The lower frequency bound for the calculation.
max_freq: float
The upper frequency bound for the calculation.
Returns:
rms: float
The fractional rms amplitude contained between min_freq and
max_freq.
rms_err: float
The error on the fractional rms amplitude.
Other Parameters:
poisson_noise_level : float, default None
This is the Poisson noise level of the PDS with the same
normalization as the PDS. If poisson_noise_level is None,
the Poisson noise is calculated in the ideal case
(e.g. 2./<countrate> for fractional rms normalisation).
Dead time and other instrumental effects can alter it.
The user can fit the Poisson noise level outside
this function, using the same normalisation as the PDS,
and it will be subtracted from the powers here.
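The rms integral described above can be sketched in a few lines. This is an illustration of the fractional-rms recipe, rms = sqrt(Σ (P − P_noise) · df) over the selected band, with made-up numbers; it is not the stingray implementation:

```python
import numpy as np

# Hedged sketch: integrate the noise-subtracted power between min_freq and
# max_freq. In fractional-rms normalization the ideal Poisson noise level
# is 2 / <countrate>.
freq = np.arange(0.5, 10.5, 0.5)
df = 0.5
countrate = 100.0
poisson_noise_level = 2.0 / countrate              # ideal case
power = np.full_like(freq, poisson_noise_level) + 0.02  # flat excess signal

sel = (freq >= 1.0) & (freq <= 5.0)                # 9 bins in the band
rms = np.sqrt(np.sum((power[sel] - poisson_noise_level) * df))
# rms = sqrt(9 * 0.02 * 0.5) = 0.3, i.e. 30% fractional rms
```

If dead time has distorted the noise level, a fitted value would be passed in place of the ideal 2/countrate, exactly as the parameter description suggests.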
Clean up the list of attributes, only giving out those pointing to data.
List all the attributes that point directly to valid data. This method goes through all the
attributes of the class, eliminating methods, properties, and attributes that are complicated
to serialize such as other StingrayObject, or arrays of objects.
This function does not distinguish between array-like and scalar data.
Returns:
data_attributes : list of str
List of attributes pointing to data that are not methods, properties,
or other StingrayObject instances.
This correction is based on the formula given in Zhang et al. 2015, assuming
a constant dead time for all events.
For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad
Detected background count rate. This is important to estimate when the dead
time is driven by the combination of source counts and background counts
(e.g. in an imaging X-ray detector).
paralyzable: bool, default False
If True, the dead time correction is done assuming a paralyzable
dead time. If False, the correction is done assuming a non-paralyzable
(more common) dead time.
limit_k : int, default 200
Limit to this value the number of terms in the inner loops of
calculations. Check the plots returned by the check_B and
check_A functions to test that this number is adequate.
n_approx : int, default None
Number of bins to calculate the model power spectrum. If None, it will use the size of
the input frequency array. Relatively simple models (e.g., low count rates compared to
dead time) can use a smaller number of bins to speed up the calculation, and the final
power values will be interpolated.
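A basic dead-time relation helps set expectations for this correction. The formula below is the standard non-paralyzable dead-time rate equation, shown here as a hedged illustration of why high count rates need the Zhang et al. (2015) treatment; it is not the correction itself:

```python
# Hedged illustration: for non-paralyzable dead time tau, the detected count
# rate saturates as r_det = r_in / (1 + r_in * tau), so at high rates a large
# fraction of events (and power-spectral signal) is lost.
def detected_rate_nonparalyzable(r_in, deadtime):
    """Detected rate for incident rate r_in and dead time per event."""
    return r_in / (1.0 + r_in * deadtime)

# 1000 ct/s incident with 2.5 ms dead time: only ~286 ct/s detected
r_det = detected_rate_nonparalyzable(1000.0, 2.5e-3)
```

The full Zhang et al. (2015) model goes further, predicting the frequency-dependent distortion of the power spectrum rather than just the mean rate.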
Create a StingrayObject object from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The rest of columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
Calculate an average power spectrum from an event list.
Parameters:
events : stingray.EventList
Event list to be analyzed.
dt : float
The time resolution of the intermediate light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
silent : bool, default False
Silence the progress bars.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
dt : float
The time resolution of the light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
silent : bool, default False
Silence the progress bars.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
dt : float
The time resolution of the intermediate light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
silent : bool, default False
Silence the progress bars.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
Create a StingrayObject object from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr.
The rest of columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
Since pandas does not support n-D data, multi-dimensional arrays can be
specified as <colname>_dimN_M_K etc.
See documentation of make_1d_arrays_into_nd for details.
What attribute of the time series will be used as the error bar.
segment_size : float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum.
norm : str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is
fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
silent : bool, default False
Silence the progress bars.
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
save_all : bool, default False
Save all intermediate PDSs used for the final average.
Calculate an average power spectrum from an array of event times.
Parameters:
times : np.array
Event arrival times.
dt : float
The time resolution of the intermediate light curves
(sets the Nyquist frequency).
Other Parameters:
segment_size : float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for
AveragedPowerspectrum.
gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with
the GTIs of the input object. Can cause errors if there are
overlaps between these GTIs and the input object GTIs. If that
happens, assign the desired GTIs to the input object.
norm : str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac
is fractional rms, leahy is Leahy+83 normalization, and none is
the unnormalized periodogram.
use_common_mean : bool, default True
The mean of the light curve can be estimated in each interval, or
on the full light curve. This gives different results
(Alston+2013). By default, we assume the mean is calculated on the
full light curve, but the user can set use_common_mean to False
to calculate it on a per-segment basis.
The dataset MUST contain at least a column named like the
main_array_attr.
The rest of columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
For AveragedCrossspectrum, segment_size must be specified:
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
…
ValueError: segment_size must be specified…
And it needs to be finite!
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
…
ValueError: segment_size must be finite!
List the names of the internal array attributes of the Stingray Object.
These are array attributes that can be set by properties, and are generally indicated
by an underscore followed by the name of the property that links to it
(e.g. _counts in Lightcurve).
By array attributes, we mean the ones with the same size and shape as
main_array_attr (e.g. time in EventList)
The formula used to calculate the upper limit assumes the Leahy
normalization.
If the periodogram is in another normalization, we will internally
convert it to Leahy before calculating the upper limit.
Parameters:
fmin: float
The minimum frequency to search (defaults to the first nonzero bin)
fmax: float
The maximum frequency to search (defaults to the Nyquist frequency)
Returns:
a: float
The modulation amplitude that could produce P > pmeas with probability
1 - c.
Other Parameters:
c: float
The confidence value for the upper limit (e.g. 0.95 = 95%)
Examples
>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> assert np.isclose(
...     pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99),
...     0.10164, atol=0.0001)
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
Parameters:
labels : iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis : list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the ``matplotlib.pyplot.axis()`` method.
title : str, default None
The title of the plot.
marker : str, default '-'
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save : boolean, optional, default False
If True, save the figure with the specified filename.
filename : str
File name of the image to save. Depends on the boolean save.
ax : matplotlib.Axes object
An axes object to fill with the cross correlation plot.
Return a pretty-printed string representation of the object.
This is useful for debugging, and for interactive use.
Other Parameters:
func_to_apply : function
A function that modifies the attributes listed in attrs_to_apply.
It must return the modified attributes and a label to be printed.
If None, no function is applied.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a column named like the main_array_attr.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all attributes such as mission, gti, etc. with
no significant loss of information. Other file formats might lose part
of the metadata, so must be used with care.
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, but not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values should be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are ‘pickle’, ‘hea’, and any Table-supported
format such as ‘hdf5’, ‘ascii.ecsv’, etc.
Logarithmic rebin of the periodogram.
The new frequency depends on the previous frequency
modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``
parameter that steers the frequency resolution
Returns:
new_spec : Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an object of class
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs : list of str or None
Array attributes to be operated on with error_operation. Defaults to all
array attributes ending with _err.
error_operation : function
Function to be called to propagate the errors.
inplace : bool
If True, overwrite the current time series. Otherwise, return a new one.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the meta dictionary.
Other Parameters:
no_longdouble : bool
If True, reduce the precision of longdouble arrays to double precision.
This needs to be done in some cases, e.g. when the table is to be saved
in an architecture not supporting extended precision (e.g. ARM), but can
also be useful when an extended precision is not needed.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Since pandas does not support n-D data, multi-dimensional arrays are
converted into columns before the conversion, with names <colname>_dimN_M_K etc.
See documentation of make_nd_into_arrays for details.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Make an averaged periodogram from a light curve by segmenting the light
curve, Fourier-transforming each segment and then averaging the
resulting periodograms.
Parameters:
data: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object
The light curve data to be Fourier-transformed.
segment_size: float
The size of each segment to average. Note that if the total
duration of each Lightcurve object in lc is not an integer
multiple of the segment_size, then any fraction left-over at the
end of the time series will be lost.
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
Good Time intervals.
This choice overrides the GTIs in the single light curves. Use with
care, especially if these GTIs have overlaps with the input
object GTIs! If you’re getting errors regarding your GTIs, don’t
use this and only give GTIs to the input object before making
the power spectrum.
silent : bool, default False
Do not show a progress bar when generating an averaged cross spectrum.
Useful for the batch execution of many spectra.
dt: float
The time resolution of the light curve. Only needed when constructing
light curves in the case where data is of type :class:`EventList`.
save_all: bool, default False
Save all intermediate PDSs used for the final average. Use with care.
This is likely to fill up your RAM on medium-sized datasets, and to
slow down the computation when rebinning.
skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your
inputs!).
Attributes:
freq: numpy.ndarray
The array of mid-bin frequencies that the Fourier transform samples.
power: numpy.ndarray
The array of normalized powers
power_err: numpy.ndarray
The uncertainties of power.
An approximation for each bin is given by power_err = power / sqrt(m),
where m is the number of powers averaged in each bin (by frequency
binning, or by averaging power spectra of segments of a light curve).
Note that for a single realization (m = 1) the error is equal to the
power.
unnorm_power: numpy.ndarray
The array of unnormalized powers
unnorm_power_err: numpy.ndarray
The uncertainties of unnorm_power.
df: float
The frequency resolution.
m: int
The number of averaged periodograms.
n: int
The number of data points in the light curve.
nphots: float
The total number of photons in the light curve.
cs_all: list of :class:`Powerspectrum` objects
The list of all periodograms used to calculate the average periodogram.
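The segmenting-and-averaging procedure described above can be sketched in plain NumPy. This is an illustrative re-implementation of the idea, not stingray's actual code; the function and variable names are made up:

```python
import numpy as np

def averaged_leahy_periodogram(counts, dt, segment_size):
    """Average Leahy-normalized periodograms over equal-length segments."""
    nbin = int(segment_size / dt)           # bins per segment
    nseg = len(counts) // nbin              # number of whole segments (m)
    freq = np.fft.rfftfreq(nbin, d=dt)[1:]  # drop the zero frequency
    powers = []
    for i in range(nseg):
        seg = counts[i * nbin:(i + 1) * nbin]
        ft = np.fft.rfft(seg)[1:]
        # Leahy+83 normalization: 2 |F|^2 / N_photons
        powers.append(2 * np.abs(ft) ** 2 / seg.sum())
    return freq, np.mean(powers, axis=0), nseg

rng = np.random.default_rng(0)
counts = rng.poisson(100, size=8192)        # white-noise light curve, dt = 1 s
freq, power, m = averaged_leahy_periodogram(counts, dt=1.0, segment_size=512.0)
# For pure Poisson noise, the mean Leahy power is close to 2
```

Any fraction of the light curve left over after the last whole segment is simply dropped, which is the behavior described for segment_size above.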
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values will be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Name and path of the file to save the object list to.
fmt: str
The file format to store the data in.
Available options are pickle, hdf5, ascii, fits
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs: list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation: function
Function to be called to propagate the errors.
inplace: bool
If True, overwrite the current time series. Otherwise, return a new one.
Apply a mask to all array attributes of the object.
Parameters:
mask: array of bool
The mask. Has to be of the same length as self.time.
Returns:
ts_new: StingrayObject object
The new object with the mask applied if inplace is False, otherwise the
same object.
Other Parameters:
inplace: bool
If True, overwrite the current object. Otherwise, return a new one.
filtered_attrs: list of str or None
Array attributes to be filtered. Defaults to all array attributes if None.
The other array attributes will be set to None. The main array attr is always
included.
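Conceptually, applying a mask just filters every array attribute with the same boolean index, while meta attributes are left untouched. A dictionary-based sketch of that idea (illustrative only, not stingray's code):

```python
import numpy as np

# Parallel "array attributes" sharing the length of the main array (time)
ts = {
    "time": np.array([0.5, 1.5, 2.5, 3.5, 4.5]),
    "counts": np.array([3, 1, 4, 1, 5]),
    "counts_err": np.sqrt([3, 1, 4, 1, 5]),
}
meta = {"mjdref": 55000.0}   # meta attributes are not filtered

mask = ts["time"] < 3.0      # keep only the first three bins
# Every array attribute is filtered with the same mask
ts_new = {name: arr[mask] for name, arr in ts.items()}
```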
Compute the classical significances for the powers in the power
spectrum, assuming an underlying noise distribution that follows a
chi-square distribution with 2M degrees of freedom, where M is the
number of powers averaged in each bin.
Note that this function will only produce correct results when the
following underlying assumptions are fulfilled:
The power spectrum is Leahy-normalized
There is no source of variability in the data other than the
periodic signal to be determined with this method. This is important!
If there are other sources of (aperiodic) variability in the data, this
method will not produce correct results, but instead produce a large
number of spurious false positive detections!
There are no significant instrumental effects changing the
statistical distribution of the powers (e.g. pile-up or dead time)
By default, the method produces (index, p-value) pairs for all powers in
the power spectrum, where index is the numerical index of the power in
question. If a threshold is set, then only powers with p-values
below that threshold are reported, together with their respective indices. If
trial_correction is set to True, then the threshold will be corrected
for the number of trials (frequencies) in the power spectrum before
being used.
Parameters:
threshold: float, optional, default 1
The threshold to be used when reporting p-values of potentially
significant powers. Must be between 0 and 1.
Default is 1 (all p-values will be reported).
trial_correction: bool, optional, default False
A Boolean flag that sets whether the threshold will be corrected
by the number of frequencies before being applied. This decreases
the threshold (p-values need to be lower to count as significant).
Default is False (report all powers), though for any application
where threshold is set to something meaningful, this should also
be applied!
Returns:
pvals: iterable
A list of (index,p-value) tuples for all powers that have p-values
lower than the threshold specified in threshold.
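The p-value described here is the survival function of a chi-square distribution with 2M degrees of freedom, evaluated at M times the averaged Leahy power. A minimal sketch using the closed-form survival function (not stingray's code; names are illustrative):

```python
import math

def leahy_pvalue(power, m=1):
    """P(noise power >= `power`) for an averaged Leahy-normalized power.

    m * power follows a chi-square distribution with 2m degrees of freedom,
    whose survival function has the closed form
    exp(-x/2) * sum_{k<m} (x/2)^k / k!  with x = m * power.
    """
    x = m * power
    return math.exp(-x / 2) * sum((x / 2) ** k / math.factorial(k) for k in range(m))

# A power of 2 (the noise mean) is not significant; a power of 40 is.
p_noise = leahy_pvalue(2.0, m=1)    # exp(-1), about 0.37
p_signal = leahy_pvalue(40.0, m=1)  # about 2e-9

# A simple trial correction (Bonferroni) for n_freq frequencies searched:
n_freq = 10000
p_corrected = min(1.0, p_signal * n_freq)
```

Note how averaging (larger m) makes the same power level much more significant, since the noise distribution narrows.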
Coherence is defined in Vaughan and Nowak, 1996 [6].
It is a Fourier frequency dependent measure of the linear correlation
between time series measured simultaneously in two energy channels.
Compute an averaged Coherence function of cross spectrum by computing
coherence function of each segment and averaging them. The return type
is a tuple with first element as the coherence function and the second
element as the corresponding uncertainty associated with it.
Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.
Returns:
(coh, uncertainty): tuple of np.ndarray
Tuple comprising the coherence function and uncertainty.
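The averaged coherence has a compact definition: with segment-averaged cross spectrum C and power spectra P1, P2, it is |⟨C⟩|² / (⟨P1⟩⟨P2⟩). A NumPy sketch under simplifying assumptions (no noise correction, equal-length segments; not stingray's implementation):

```python
import numpy as np

def averaged_coherence(x, y, nbin):
    """Raw coherence |<C>|^2 / (<P1><P2>) averaged over segments of nbin samples."""
    nseg = len(x) // nbin
    cross, p1, p2 = 0.0, 0.0, 0.0
    for i in range(nseg):
        fx = np.fft.rfft(x[i * nbin:(i + 1) * nbin])[1:]
        fy = np.fft.rfft(y[i * nbin:(i + 1) * nbin])[1:]
        cross = cross + fx * np.conj(fy)   # accumulate the cross spectrum
        p1 = p1 + np.abs(fx) ** 2          # and the two power spectra
        p2 = p2 + np.abs(fy) ** 2
    return np.abs(cross) ** 2 / (p1 * p2)

rng = np.random.default_rng(1)
common = rng.normal(size=4096)             # shared signal -> high coherence
x = common + 0.1 * rng.normal(size=4096)
y = common + 0.1 * rng.normal(size=4096)
coh = averaged_coherence(x, y, nbin=256)
# Mostly-shared signals give coherence close to 1 at all frequencies
```

By the Cauchy-Schwarz inequality this estimator is bounded by 1; it only drops below 1 where the two series carry independent (e.g. noise) power.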
Clean up the list of attributes, only giving out those pointing to data.
List all the attributes that point directly to valid data. This method goes through all the
attributes of the class, eliminating methods, properties, and attributes that are complicated
to serialize, such as other StingrayObject instances or arrays of objects.
This function does not distinguish between array-like data and scalar data.
Returns:
data_attributes: list of str
List of attributes pointing to data that are not methods, properties,
or other StingrayObject instances.
This correction is based on the formula given in Zhang et al. 2015, assuming
a constant dead time for all events.
For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad
Detected background count rate. This is important to estimate when dead time is determined by the
combination of the source counts and background counts (e.g. in an imaging X-ray detector).
paralyzable: bool, default False
If True, the dead time correction is done assuming a paralyzable
dead time. If False, the correction is done assuming a non-paralyzable
(more common) dead time.
limit_k: int, default 200
Limit to this value the number of terms in the inner loops of
calculations. Check the plots returned by the check_B and
check_A functions to test that this number is adequate.
n_approx: int, default None
Number of bins to calculate the model power spectrum. If None, it will use the size of
the input frequency array. Relatively simple models (e.g., low count rates compared to
dead time) can use a smaller number of bins to speed up the calculation, and the final
power values will be interpolated.
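For orientation, the standard textbook relations between incident rate r, dead time tau, and detected rate differ between the two cases mentioned above (the Zhang et al. 2015 correction of the power spectrum itself is considerably more involved than this):

```python
import numpy as np

def detected_rate_nonparalyzable(r_in, tau):
    """Non-paralyzable dead time: each *detected* event blocks the detector for tau."""
    return r_in / (1 + r_in * tau)

def detected_rate_paralyzable(r_in, tau):
    """Paralyzable dead time: every *incident* event (detected or not) restarts tau."""
    return r_in * np.exp(-r_in * tau)

r_in, tau = 1000.0, 2.5e-3   # 1000 ct/s incident, 2.5 ms dead time
nonpar = detected_rate_nonparalyzable(r_in, tau)
par = detected_rate_paralyzable(r_in, tau)
# Paralyzable dead time always loses at least as many counts as non-paralyzable
```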
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the entries in the table's meta dictionary will
form the new meta attributes of the object.
Calculate AveragedCrossspectrum from two event lists
Parameters:
events1: stingray.EventList
Events from channel 1
events2: stingray.EventList
Events from channel 2
dt: float
The time resolution of the intermediate light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size: float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Light curves from channel 2. If arrays, use them as counts
dt: float
The time resolution of the light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size: float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
save_all: bool, default False
If True, save the cross spectrum of each segment in the cs_all
attribute of the output Crossspectrum object.
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Create a StingrayObject object from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the attributes in the dataframe's attrs dictionary will
form the new meta attributes of the object.
Since pandas does not support n-D data, multi-dimensional arrays can be
specified as <colname>_dimN_M_K etc.
See documentation of make_1d_arrays_into_nd for details.
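The round-trip idea behind this flattening can be illustrated with plain NumPy. The exact column naming used by stingray's make_nd_into_arrays / make_1d_arrays_into_nd may differ; this sketch only shows splitting an n-D array into 1-D columns and reassembling it:

```python
import numpy as np

def nd_to_columns(name, arr):
    """Split an (n, ...) array into 1-D columns keyed by the trailing indices."""
    cols = {}
    trailing = arr.shape[1:]
    for idx in np.ndindex(*trailing):
        key = name + "_dim" + "_".join(str(i) for i in idx)
        cols[key] = arr[(slice(None),) + idx]
    return cols

def columns_to_nd(name, cols, trailing):
    """Reassemble the columns produced by nd_to_columns."""
    n = len(next(iter(cols.values())))
    arr = np.empty((n,) + trailing)
    for idx in np.ndindex(*trailing):
        key = name + "_dim" + "_".join(str(i) for i in idx)
        arr[(slice(None),) + idx] = cols[key]
    return arr

spectra = np.arange(24.0).reshape(4, 2, 3)   # e.g. one 2x3 matrix per time bin
cols = nd_to_columns("spectra", spectra)     # six 1-D columns of length 4
restored = columns_to_nd("spectra", cols, (2, 3))
```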
What attribute of the time series will be used as error bar.
segment_size: float
The length, in seconds, of the light curve segments that will be averaged.
Only relevant (and required) for AveragedCrossspectrum
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
Calculate AveragedCrossspectrum from two arrays of event times.
Parameters:
times1: np.array
Event arrival times of channel 1
times2: np.array
Event arrival times of channel 2
dt: float
The time resolution of the intermediate light curves
(sets the Nyquist frequency)
Other Parameters:
segment_size: float
The length, in seconds, of the light curve segments that will be
averaged. Only relevant (and required) for AveragedCrossspectrum.
gti: [[gti0, gti1], …]
Good Time intervals. Defaults to the common GTIs from the two input
objects. Could throw errors if these GTIs have overlaps with the
input object GTIs! If you’re getting errors regarding your GTIs,
don’t use this and only give GTIs to the input objects before
making the cross spectrum.
norm: str, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is
fractional rms, “leahy” is Leahy+83 normalization, and “none” is the
unnormalized periodogram
use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on
the full light curve. This gives different results (Alston+2013).
Here we assume the mean is calculated on the full light curve, but
the user can set use_common_mean to False to calculate it on a
per-segment basis.
fullspec: bool, default False
Return the full periodogram, including negative frequencies
silent: bool, default False
Silence the progress bars
power_type: str, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’,
the real part
The dataset MUST contain at least a column named like the
main_array_attr.
The rest of columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
For an AveragedCrossspectrum, you need to specify segment_size:
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
…
ValueError: segment_size must be specified…
And it needs to be finite!
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
…
ValueError: segment_size must be finite!
List the names of the internal array attributes of the Stingray Object.
These are array attributes that can be set by properties, and are generally indicated
by an underscore followed by the name of the property that links to it (e.g.
_counts in Lightcurve).
By array attributes, we mean the ones with the same size and shape as
main_array_attr (e.g. time in EventList).
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
Parameters:
labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the ``matplotlib.pyplot.axis()`` method.
title: str, default None
The title of the plot.
marker: str, default ‘-’
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save: boolean, optional, default False
If True, save the figure with the specified filename.
filename: str
File name of the image to save. Depends on the boolean save.
ax: matplotlib.Axes object
An axes object to fill with the cross spectrum plot.
Return the power colors of the dynamical power spectrum.
Parameters:
freq_edges: iterable
The edges of the frequency bins to be used for the power colors.
freqs_to_exclude: 1-d or 2-d iterable, optional, default None
The ranges of frequencies to exclude from the calculation of the power color.
For example, the frequencies containing strong QPOs.
A 1-d iterable should contain the two edges of a single range (e.g.
[0.1, 0.2]); [[0.1, 0.2], [3, 4]] will exclude the ranges 0.1-0.2 Hz and
3-4 Hz.
poisson_level: float or iterable, optional
The Poisson noise level of the power spectrum. Defaults to the theoretical
Poisson noise level (e.g. 2 for Leahy normalization). If iterable, it should
have the same length as frequency. (This might apply to the case of a power
spectrum with a strong dead time distortion.)
Returns:
pc0: np.ndarray
pc0_err: np.ndarray
pc1: np.ndarray
pc1_err: np.ndarray
The power colors for each spectrum and their respective errors
Return a pretty-printed string representation of the object.
This is useful for debugging, and for interactive use.
Other Parameters:
func_to_apply: function
A function that modifies the attributes listed in attrs_to_apply.
It must return the modified attributes and a label to be printed.
If None, no function is applied.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a column named like the main_array_attr.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all attributes such as mission, gti, etc with
no significant loss of information. Other file formats might lose part
of the metadata, so they must be used with care.
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values should be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are ‘pickle’, ‘hea’, and any Table-supported
format such as ‘hdf5’, ‘ascii.ecsv’, etc.
Rebin the cross spectrum to a new frequency resolution df.
Parameters:
df: float
The new frequency resolution
Returns:
bin_cs: Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an object of class
AveragedPowerspectrum, too.
Other Parameters:
f: float
The rebin factor. If specified, it substitutes df with f*self.df.
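The underlying operation is simple: group f contiguous frequency bins and average the powers, which multiplies both df and the number of averaged powers m by f. A NumPy sketch, ignoring error propagation and truncating leftover bins (illustrative, not stingray's implementation):

```python
import numpy as np

def rebin_linear(freq, power, f):
    """Average powers in groups of f contiguous bins; truncate the remainder."""
    n = (len(freq) // f) * f                 # drop bins that do not fill a group
    new_freq = freq[:n].reshape(-1, f).mean(axis=1)
    new_power = power[:n].reshape(-1, f).mean(axis=1)
    return new_freq, new_power

freq = np.linspace(0.01, 10.0, 1000)
power = 2 + 10 * np.exp(-freq)               # a smooth spectrum plus a noise floor
new_freq, new_power = rebin_linear(freq, power, f=8)
# The scatter on each rebinned power shrinks roughly as 1/sqrt(f)
```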
Rebin the Dynamic Power Spectrum to a new time resolution, by summing contiguous intervals.
This is different from :meth:`DynamicalPowerspectrum.rebin_time` in that it averages n
consecutive intervals regardless of their distance in time. rebin_time will instead
average intervals that are separated at most by a time dt_new.
Rebin the Dynamic Power Spectrum to a new frequency resolution.
Rebinning is an in-place operation, i.e. will replace the existing
dyn_ps attribute.
While the new resolution does not need to be an integer multiple of the previous
frequency resolution, be aware that if it is not, the last frequency bin will be cut
off by the fraction left over by the integer division.
Parameters:
df_new: float
The new frequency resolution of the dynamical power spectrum.
Must be larger than the frequency resolution of the old dynamical
power spectrum!
Logarithmic rebin of the periodogram.
The new frequency depends on the previous frequency
modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``
Parameter that steers the frequency resolution.
Returns:
new_spec: Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum.
Note: this object will be of the same type as the object
that called this method. For example, if this method is called
from AveragedPowerspectrum, it will return an object of class
AveragedPowerspectrum, too.
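A geometric binning where each bin is (1 + f) times wider than the previous one can be sketched as follows (illustrative only; stingray's implementation also propagates errors and bookkeeping attributes):

```python
import numpy as np

def rebin_log(freq, power, f=0.01):
    """Average powers in bins whose width grows by (1 + f) at each step."""
    df = freq[1] - freq[0]
    edges = [freq[0] - df / 2]
    while edges[-1] < freq[-1]:
        df *= 1 + f                  # d nu_j = d nu_{j-1} * (1 + f)
        edges.append(edges[-1] + df)
    edges = np.asarray(edges)
    which = np.digitize(freq, edges) - 1
    new_freq, new_power = [], []
    for j in np.unique(which):       # average all old bins falling in each new bin
        sel = which == j
        new_freq.append(freq[sel].mean())
        new_power.append(power[sel].mean())
    return np.asarray(new_freq), np.asarray(new_power)

freq = np.arange(1, 1001) * 0.01     # 0.01..10 Hz, df = 0.01 Hz
power = np.full_like(freq, 2.0)      # flat noise spectrum
nf, npow = rebin_log(freq, power, f=0.1)
# Far fewer bins at high frequency, where each bin averages many powers
```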
Rebin the Dynamic Power Spectrum to a new time resolution.
Note: this is not changing the time resolution of the input light
curve! dt is the integration time of each line of the dynamical power
spectrum (typically, an integer multiple of segment_size).
While the new resolution does not need to be an integer multiple of the previous
time resolution, be aware that if it is not, the last time bin will be cut
off by the fraction left over by the integer division.
Parameters:
dt_new: float
The new time resolution of the dynamical power spectrum.
Must be larger than the time resolution of the old dynamical power
spectrum!
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs: list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation: function
Function to be called to propagate the errors.
inplace: bool
If True, overwrite the current time series. Otherwise, return a new one.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the meta dictionary.
Other Parameters:
no_longdouble: bool
If True, reduce the precision of longdouble arrays to double precision.
This needs to be done in some cases, e.g. when the table is to be saved
in an architecture not supporting extended precision (e.g. ARM), but can
also be useful when extended precision is not needed.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Since pandas does not support n-D data, multi-dimensional arrays are
converted into columns before the conversion, with names <colname>_dimN_M_K etc.
See documentation of make_nd_into_arrays for details.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Create a dynamical power spectrum, also often called a spectrogram.
This class will divide a Lightcurve object into segments of
length segment_size, create a power spectrum for each segment and store
all powers in a matrix as a function of both time (using the mid-point of
each segment) and frequency.
This is often used to trace changes in period of a (quasi-)periodic signal
over time.
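The construction can be sketched in NumPy: chop the light curve into segments, compute one periodogram per segment, and stack them into a time-frequency matrix. This is a plain spectrogram; stingray additionally handles GTIs, normalizations, and input types (names below are illustrative):

```python
import numpy as np

def dynamical_powerspectrum(counts, dt, segment_size):
    """Stack one Leahy periodogram per segment into a (freq, time) matrix."""
    nbin = int(segment_size / dt)
    nseg = len(counts) // nbin
    freq = np.fft.rfftfreq(nbin, d=dt)[1:]
    dyn_ps = np.empty((len(freq), nseg))
    times = np.empty(nseg)
    for i in range(nseg):
        seg = counts[i * nbin:(i + 1) * nbin]
        ft = np.fft.rfft(seg)[1:]
        dyn_ps[:, i] = 2 * np.abs(ft) ** 2 / seg.sum()   # Leahy normalization
        times[i] = (i + 0.5) * segment_size              # mid-point of the segment
    return freq, times, dyn_ps

rng = np.random.default_rng(2)
counts = rng.poisson(50, size=4096)   # white-noise light curve, dt = 1 s
freq, times, dyn_ps = dynamical_powerspectrum(counts, dt=1.0, segment_size=256.0)
```

A drifting QPO would show up as a ridge in dyn_ps whose frequency changes from column to column.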
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
Good Time intervals.
This choice overrides the GTIs in the single light curves. Use with
care, especially if these GTIs have overlaps with the input
object GTIs! If you’re getting errors regarding your GTIs, don’t
use this and only give GTIs to the input object before making
the power spectrum.
sample_time: float
Compulsory for input stingray.EventList data. The time resolution of the
lightcurve that is created internally from the input event lists. Drives the
Nyquist frequency.
Attributes:
segment_size: float
The size of each segment to average. Note that if the total
duration of each input object in lc is not an integer multiple
of the segment_size, then any fraction left-over at the end of the
time series will be lost.
dyn_ps: np.ndarray
The matrix of normalized squared absolute values of Fourier
amplitudes. The axes are given by the freq
and time attributes.
norm: {``leahy`` | ``frac`` | ``abs`` | ``none``}
The normalization of the periodogram.
freq: numpy.ndarray
The array of mid-bin frequencies that the Fourier transform samples.
time: numpy.ndarray
The array of mid-point times of each interval used for the dynamical
power spectrum.
df: float
The frequency resolution.
dt: float
The time resolution of the dynamical spectrum. It is not the time resolution of the
input light curve. It is the integration time of each line of the dynamical power
spectrum (typically, an integer multiple of segment_size).
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values will be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Name and path of the file to save the object list to.
fmt: str
The file format to store the data in.
Available options are pickle, hdf5, ascii, fits
A string indicating the size of the correlation output.
See the relevant scipy documentation [scipy-docs]
for more details.
norm: {``none``, ``variance``}
if “variance”, the cross correlation is normalized so that perfect
correlation gives 1, and perfect anticorrelation gives -1. See
Gaskell & Peterson 1987, Gardner & Done 2017
The first light curve data for correlation calculations.
lc2: :class:`stingray.Lightcurve`
The second light curve data for the correlation calculations.
cross: :class:`stingray.Crossspectrum`
The cross spectrum data for the correlation calculations.
corr: numpy.ndarray
An array of correlation data calculated from two light curves
time_lags: numpy.ndarray
An array of all possible time lags against which each point in corr is calculated
dt: float
The time resolution of each light curve (used in time_lag calculations)
time_shift: float
Time lag that gives maximum value of correlation between two light curves.
There will be maximum correlation between light curves if one of the light curve
is shifted by time_shift.
n: int
Number of points in self.corr (length of cross-correlation data)
auto: bool
An internal flag to indicate whether this is a cross-correlation or an auto-correlation.
Calculate the cross correlation against all possible time lags, both positive and negative.
This method uses signal.correlation_lags(), which requires SciPy versions >= 1.6.1 ([scipy-docs-lag]).
Parameters:
dt: float, optional, default ``1.0``
Time resolution of the light curve, should be passed when object is populated with
correlation data and no information about light curve can be extracted. Used to
calculate time_lags.
Returns:
self.time_shift: float
Value of the time lag that gives maximum value of correlation between two light curves.
self.time_lags: numpy.ndarray
An array of time_lags calculated from correlation data
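The time shift is just the lag at which the cross-correlation peaks. A NumPy sketch of that idea, building the lag array by hand (scipy.signal.correlation_lags produces the same array; this is not stingray's code):

```python
import numpy as np

def time_shift(a, b, dt):
    """Lag at which the cross-correlation of a and b is maximal."""
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    # Lags matching numpy's "full" correlation output, in time units
    lags = np.arange(-(len(b) - 1), len(a)) * dt
    return lags[np.argmax(corr)], corr, lags

t = np.arange(0, 100, 0.1)
a = np.sin(2 * np.pi * 0.05 * t) * np.exp(-((t - 50) ** 2) / 200)
b = np.roll(a, 30)        # b is a delayed by 30 bins = 3 s
shift, corr, lags = time_shift(a, b, dt=0.1)
# With numpy's correlation convention, a negative shift here means
# b is a delayed copy of a; the sign convention depends on the definition used.
```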
Plot the Crosscorrelation as a function of time lag using Matplotlib.
Plot the Crosscorrelation object on a graph with self.time_lags on the x-axis and
self.corr on the y-axis.
Parameters:
labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the matplotlib.pyplot.axis() function.
title: str, default None
The title of the plot.
marker: str, default ‘-’
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save: boolean, optional (default=False)
If True, save the figure with the specified filename.
filename: str
File name of the image to save. Depends on the boolean save.
ax: matplotlib.Axes object
An axes object to fill with the cross correlation plot.
Calculate the cross correlation against all possible time lags, both positive and negative.
This method uses signal.correlation_lags(), which requires SciPy versions >= 1.6.1 ([scipy-docs-lag]).
Parameters:
dt: float, optional, default ``1.0``
Time resolution of the light curve, should be passed when object is populated with
correlation data and no information about light curve can be extracted. Used to
calculate time_lags.
Returns:
self.time_shift: float
Value of the time lag that gives maximum value of correlation between two light curves.
self.time_lags: numpy.ndarray
An array of time_lags calculated from correlation data
Plot the Crosscorrelation as a function of time lag using Matplotlib.
Plot the Crosscorrelation object on a graph with self.time_lags on the x-axis and
self.corr on the y-axis.
Parameters:
labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the matplotlib.pyplot.axis() function.
title: str, default None
The title of the plot.
marker: str, default ‘-’
Line style and color of the plot. Line styles and colors are
combined in a single format string, as in 'bo' for blue
circles. See matplotlib.pyplot.plot for more options.
save: boolean, optional (default=False)
If True, save the figure with the specified filename.
filename: str
File name of the image to save. Depends on the boolean save.
ax: matplotlib.Axes object
An axes object to fill with the cross correlation plot.
The two input light curves must be strictly simultaneous, and recorded by
two independent detectors with similar responses, so that the count rates
are similar and dead time is independent.
The method does not apply to different energy channels of the same
instrument, or to the signal observed by two instruments with very
different responses. See the paper for caveats.
Parameters:
data1: Lightcurve or EventList
Input data for channel 1
data2: Lightcurve or EventList
Input data for channel 2. Must be strictly simultaneous to data1
and, if a light curve, have the same binning time. Also, it must be
strictly independent, e.g. from a different detector. There must be
no dead time cross-talk between the two time series.
segment_size: float
The final Fourier products are averaged over many segments of the
input light curves. This is the length of each segment being averaged.
Note that the light curve must be long enough to have at least 30
segments, as the result gets better as one averages more and more
segments.
dt: float
Time resolution of the light curves used to produce periodograms
The content of results depends on whether return_objects is
True or False.
If return_objects==False,
results is a Table with the following columns:
pds1: the corrected PDS of lc1
pds2: the corrected PDS of lc2
cs: the corrected cospectrum
ptot: the corrected PDS of lc1 + lc2
If return_objects is True, results is a dict, with keys
named like the columns
listed above but with AveragePowerspectrum or
AverageCrossspectrum objects instead of arrays.
Other Parameters:
plot: bool, default False
Plot diagnostics: check if the smoothed Fourier difference scatter is
a good approximation of the data scatter.
ax: matplotlib.axes.Axes object
If not None and plot is True, use this axis object to produce
the diagnostic plot. Otherwise, create a new figure.
smoothing_alg: {‘gauss’, …}
Smoothing algorithm. For now, the only smoothing algorithm allowed is
gauss, which applies a Gaussian Filter from scipy.
smoothing_lengthint, default segment_size*3
Number of bins to smooth in gaussian window smoothing
verbose: bool, default False
Print out information on the outcome of the algorithm (recommended)
tolerancefloat, default 0.05
Accepted relative error on the FAD-corrected Fourier amplitude, to be
used as success diagnostics.
Should be
`stdtheor=2/np.sqrt(n)std=(average_corrected_fourier_diff/n).std()np.abs((std-stdtheor)/stdtheor)<tolerance`
strictbool, default False
Decide what to do if the condition on tolerance is not met. If True,
raise a RuntimeError. If False, just throw a warning.
output_filestr, default None
Name of an output file (any extension automatically recognized by
Astropy is fine)
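The tolerance diagnostic quoted above can be sketched in plain numpy. This is illustrative only: the real check runs on the FAD-corrected Fourier differences produced by the function, while here a stand-in array with the expected white-noise scatter is used.

```python
import numpy as np

# Illustrative sketch of the tolerance diagnostic: compare the measured
# scatter of (stand-in) corrected Fourier differences with the theoretical
# white-noise value 2/sqrt(n).
rng = np.random.default_rng(42)
n = 10000
# stand-in data, drawn with the scatter the corrected differences should have
average_corrected_fourier_diff = rng.normal(0.0, 2.0 * np.sqrt(n), n)

stdtheor = 2 / np.sqrt(n)
std = (average_corrected_fourier_diff / n).std()
success = np.abs((std - stdtheor) / stdtheor) < 0.05  # tolerance = 0.05
```

When `success` is False, the FAD correction should not be trusted (and, with `strict=True`, a RuntimeError would be raised).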
The two input light curves must be strictly simultaneous, and recorded by
two independent detectors with similar responses, so that the count rates
are similar and dead time is independent.
The method does not apply to different energy channels of the same
instrument, or to the signal observed by two instruments with very
different responses. See the paper for caveats.
Parameters:
lc1 : stingray.lightcurve.Lightcurve
Light curve from channel 1
lc2 : stingray.lightcurve.Lightcurve
Light curve from channel 2. Must be strictly simultaneous to lc1
and have the same binning time. Also, it must be strictly independent,
e.g. from a different detector. There must be no dead time cross-talk
between the two light curves.
segment_size: float
The final Fourier products are averaged over many segments of the
input light curves. This is the length of each segment being averaged.
Note that the light curve must be long enough to have at least 30
segments, as the result gets better as one averages more and more
segments.
The content of results depends on whether return_objects is
True or False.
If return_objects==False,
results is a Table with the following columns:
pds1: the corrected PDS of lc1
pds2: the corrected PDS of lc2
cs: the corrected cospectrum
ptot: the corrected PDS of lc1 + lc2
If return_objects is True, results is a dict, with keys
named like the columns
listed above but with AveragePowerspectrum or
AverageCrossspectrum objects instead of arrays.
Other Parameters:
plot : bool, default False
Plot diagnostics: check if the smoothed Fourier difference scatter is
a good approximation of the data scatter.
ax : matplotlib.axes.Axes object
If not None and plot is True, use this axis object to produce
the diagnostic plot. Otherwise, create a new figure.
smoothing_alg : {‘gauss’, …}
Smoothing algorithm. For now, the only smoothing algorithm allowed is
gauss, which applies a Gaussian Filter from scipy.
smoothing_length : int, default segment_size*3
Number of bins to smooth in Gaussian window smoothing
verbose : bool, default False
Print out information on the outcome of the algorithm (recommended)
tolerance : float, default 0.05
Accepted relative error on the FAD-corrected Fourier amplitude, to be
used as success diagnostics.
The following condition should hold:
`stdtheor = 2 / np.sqrt(n)`,
`std = (average_corrected_fourier_diff / n).std()`,
`np.abs((std - stdtheor) / stdtheor) < tolerance`
strict : bool, default False
Decide what to do if the condition on tolerance is not met. If True,
raise a RuntimeError. If False, just throw a warning.
output_file : str, default None
Name of an output file (any extension automatically recognized by
Astropy is fine)
This function produces a plot of \(A_k - r_0^2 t_b^2\) vs \(k\), to visually check that
\(A_k \rightarrow r_0^2 t_b^2\) for \(k\rightarrow\infty\), as per Eq. 43 in Zhang+95.
With this function it is possible to determine how many inner loops k (limit_k in the
function pds_model_zhang) are necessary for a correct approximation of the dead time model.
Parameters:
rate : float
Count rate, either incident or detected (use the rate_is_incident bool to specify)
td : float
Dead time
tb : float
Bin time of the light curve
Other Parameters:
max_k : int
Maximum k to plot
save_to : str, default None
If not None, save the plot to this file
linthresh : float, default 0.000001
Linear threshold for the “symlog” scale of the plot
rate_is_incident : bool, default True
If True, the input rate is the incident count rate. If False, it is the detected one.
Check that \(B\rightarrow 0\) for \(k\rightarrow \infty\).
This function produces a plot of \(B_k\) vs \(k\), to visually check that
\(B_k \rightarrow 0\) for \(k\rightarrow\infty\), as per Eq. 43 in Zhang+95.
With this function it is possible to determine how many inner loops k (limit_k in the
function pds_model_zhang) are necessary for a correct approximation of the dead time model.
Parameters:
rate : float
Count rate, either incident or detected (use the rate_is_incident bool to specify)
td : float
Dead time
tb : float
Bin time of the light curve
Other Parameters:
max_k : int
Maximum k to plot
save_to : str, default None
If not None, save the plot to this file
linthresh : float, default 0.000001
Linear threshold for the “symlog” scale of the plot
rate_is_incident : bool, default True
If True, the input rate is the incident count rate. If False, it is the detected one.
Limit to this value the number of terms in the inner loops of
calculations. Check the plots returned by the check_B and
check_A functions to test that this number is adequate.
background_rate : float, default 0
Detected background count rate. This is important to estimate when dead time is given by the
combination of the source counts and background counts (e.g. in an imaging X-ray detector).
n_approx : int, default None
Number of bins to calculate the model power spectrum. If None, it will use the size of
the input frequency array. Relatively simple models (e.g., low count rates compared to
dead time) can use a smaller number of bins to speed up the calculation, and the final
power values will be interpolated.
Limit to this value the number of terms in the inner loops of
calculations. Check the plots returned by the check_B and
check_A functions to test that this number is adequate.
rate_is_incident : bool, default True
If True, the input rate is the incident count rate. If False, it is the
detected count rate.
Calculate the incident count rate given the dead time and detected count rate.
Parameters:
td : float
Dead time
r_0 : float
Detected count rate
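For non-paralyzable dead time, the relation between incident and detected count rates has a closed form. The sketch below uses the standard textbook formula; stingray's own function may differ in details.

```python
# Standard non-paralyzable dead time relation (a sketch, not necessarily
# stingray's exact implementation): r_0 = r_in / (1 + r_in * td).
def incident_rate(td, r_0):
    """Incident count rate from dead time td and detected count rate r_0."""
    return r_0 / (1 - r_0 * td)

def detected_rate(td, r_in):
    """Inverse relation: detected count rate from incident count rate."""
    return r_in / (1 + r_in * td)

rate_in = incident_rate(2.5e-3, 300.0)  # -> 1200.0
```

Note how quickly the correction grows: at a detected rate of 300 ct/s with 2.5 ms dead time, three quarters of the incident counts are lost.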
Higher-Order Fourier and Spectral Timing Products
These classes implement higher-order Fourier analysis products (e.g. Bispectrum) and
Spectral Timing related methods taking advantage of both temporal and spectral information in
modern data sets.
The Bispectrum is a higher-order time series analysis method, calculated by the
indirect method as the Fourier transform of the triple auto-correlation function,
also called the 3rd-order cumulant.
Maximum lag on both positive and negative sides of the
3rd-order cumulant (similar to lags in correlation).
If None, the maximum lag is set to one-half of the length of the light curve.
Flag to decide between biased or unbiased normalization for the 3rd-order cumulant function.
References
1) The biphase explained: understanding the asymmetries in coupled Fourier components of astronomical time series
by Thomas J. Maccarone Department of Physics, Box 41051, Science Building, Texas Tech University, Lubbock TX 79409-1051
School of Physics and Astronomy, University of Southampton, SO16 4ES
2) T. S. Rao, M. M. Gabr, An Introduction to Bispectral Analysis and Bilinear Time
Series Models, Lecture Notes in Statistics, Volume 24, D. Brillinger, S. Fienberg,
J. Gani, J. Hartigan, K. Krickeberg, Editors, Springer-Verlag, New York, NY, 1984.
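The indirect method starts from the 3rd-order cumulant. Below is a rough, illustrative numpy sketch of a biased estimator (not the stingray implementation, and O(maxlag² · N), so only suitable for small inputs):

```python
import numpy as np

def third_order_cumulant(x, maxlag):
    """Biased estimate of the 3rd-order cumulant c3(k, l) of a series,
    for lags -maxlag..maxlag (hypothetical helper, illustrative only)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()  # work on the mean-subtracted series
    n = len(x)
    lags = np.arange(-maxlag, maxlag + 1)
    c3 = np.zeros((len(lags), len(lags)))
    for i, k in enumerate(lags):
        for j, l in enumerate(lags):
            # valid range of t such that t, t+k, t+l all fall in [0, n)
            t0 = max(0, -k, -l)
            t1 = min(n, n - k, n - l)
            t = np.arange(t0, t1)
            # biased normalization: always divide by n
            c3[i, j] = np.sum(x[t] * x[t + k] * x[t + l]) / n
    return lags, c3

lags, c3 = third_order_cumulant(np.sin(np.linspace(0, 20 * np.pi, 512)), 8)
```

The bispectrum is then the 2-D Fourier transform of this cumulant; by construction c3 is symmetric under exchange of the two lags.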
Plot the 3rd-order cumulant as a function of time lags using matplotlib.
Plot the cum3 attribute on a graph with the lags attribute on the x- and y-axes and
cum3 on the z-axis.
Parameters:
axis : list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the matplotlib.pyplot.axis() method.
save : bool, optional, default False
If True, save the figure with the specified filename.
filename : str
File name and path of the image to save. Depends on the boolean save.
Plot the magnitude of the bispectrum as a function of freq using matplotlib.
Plot the bispec_mag attribute on a graph with the freq attribute on the x- and y-axes and
the bispec_mag attribute on the z-axis.
Parameters:
axis : list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the matplotlib.pyplot.axis() method.
save : bool, optional, default False
If True, save the figure with the specified filename and path.
filename : str
File name and path of the image to save. Depends on the bool save.
Plot the phase of the bispectrum as a function of freq using matplotlib.
Plot the bispec_phase attribute on a graph with the freq attribute on the x- and
y-axes and the bispec_phase attribute on the z-axis.
Parameters:
axis : list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example
it can be a list like [xmin, xmax, ymin, ymax] or any other
acceptable argument for the matplotlib.pyplot.axis() function.
save : bool, optional, default False
If True, save the figure with the specified filename and path.
filename : str
File name and path of the image to save. Depends on the bool save.
Compute a covariance spectrum for the data. The input data can be
either in event data or pre-made light curves. Event data can either
be in the form of a numpy.ndarray with (timestamp,energy) pairs or
a stingray.events.EventList object. If light curves are formed ahead
of time, then a list of stingray.Lightcurve objects should be passed to the
object, ideally one light curve for each band of interest.
For the case where the data is input as a list of stingray.Lightcurve objects,
the reference band(s) should either be
a list of stingray.Lightcurve objects with the reference band for each band
of interest pre-made, or
None, in which case reference bands will be
formed by combining all light curves except for the band of interest.
In the case of event data, band_interest and ref_band_interest can
be (multiple) pairs of energies, and the light curves for the bands of
interest and reference bands will be produced dynamically.
data contains the time series data, either in the form of a
2-D array of (timestamp,energy) pairs for event data, or as a
list of light curves.
Note: The event list must be in sorted order with respect to the
times of arrival.
dt : float
The time resolution of the stingray.Lightcurve formed from the energy bin.
Only used if data is an event list.
band_interest : {None, iterable of tuples}
If None, all possible energy values will be assumed to be of
interest, and a covariance spectrum in the highest resolution
will be produced.
Note: if the input is a list of stingray.Lightcurve objects, then the user may
supply their energy values here, for construction of a
reference band.
ref_band_interest : {None, tuple, Lightcurve, list of Lightcurve objects}
Defines the reference band to be used for comparison with the
bands of interest. If None, all bands except the band of
interest will be used for each band of interest, respectively.
Alternatively, a tuple can be given for event list data, which will
extract the reference band (always excluding the band of interest),
or one may put in a single stingray.Lightcurve object to be used (the same
for each band of interest) or a list of stingray.Lightcurve objects, one for
each band of interest.
std : float or np.array or list of numbers
The term std is used to calculate the excess variance of a band.
If std is None, the default Poisson case is assumed and the
standard deviation is calculated as mean(lc)**0.5. If a single
float is given, it is used as the standard deviation for all bands.
If an iterable of numbers is given, their mean is used for the same purpose.
References
[1] Wilkinson, T. and Uttley, P. (2009), Accretion disc variability in the hard state of black hole X-ray binaries. Monthly Notices of the Royal Astronomical Society, 397: 666–676. doi: 10.1111/j.1365-2966.2009.15008.x
Attributes:
unnorm_covar : np.ndarray
An array of arrays with the mid-point of band_interest and their
covariance. It is the array form of the dictionary energy_covar.
The covariance values are unnormalized.
data contains the time series data, either in the form of a
2-D array of (timestamp,energy) pairs for event data, or as a
list of stingray.Lightcurve objects.
Note: The event list must be in sorted order with respect to the
times of arrival.
segment_size : float
The length of each segment in the averaged covariance spectrum.
The number of segments will be calculated automatically using the
total length of the data set and the segment_size defined here.
dt : float
The time resolution of the stingray.Lightcurve formed
from the energy bin. Only used if data is an event list.
band_interest : {None, iterable of tuples}
If None, all possible energy values will be assumed to be of
interest, and a covariance spectrum in the highest resolution
will be produced.
Note: if the input is a list of stingray.Lightcurve objects,
then the user may supply their energy values here, for construction of a
reference band.
ref_band_interest : {None, tuple, Lightcurve, list of Lightcurve objects}
Defines the reference band to be used for comparison with the
bands of interest. If None, all bands except the band of
interest will be used for each band of interest, respectively.
Alternatively, a tuple can be given for event list data, which will
extract the reference band (always excluding the band of interest),
or one may put in a single stingray.Lightcurve object to be used (the same
for each band of interest) or a list of stingray.Lightcurve objects, one for
each band of interest.
std : float or np.array or list of numbers
The term std is used to calculate the excess variance of a band.
If std is None, the default Poisson case is assumed and the
standard deviation is calculated as mean(lc)**0.5. If a single
float is given, it is used as the standard deviation for all bands.
If an iterable of numbers is given, their mean is used for the same purpose.
Attributes:
unnorm_covar : np.ndarray
An array of arrays with the mid-point of band_interest and their
covariance. It is the array form of the dictionary energy_covar.
The covariance values are unnormalized.
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the entries in the table’s meta dictionary will
form the new meta attributes of the object.
Create a StingrayObject object from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
Since pandas does not support n-D data, multi-dimensional arrays can be
specified as <colname>_dimN_M_K etc.
See documentation of make_1d_arrays_into_nd for details.
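The flattening idea can be illustrated with plain pandas. The exact column-naming scheme is defined by stingray's make_1d_arrays_into_nd / make_nd_into_arrays; the names below are illustrative only.

```python
import numpy as np
import pandas as pd

# Hypothetical illustration: a (4, 2, 3) array becomes six 1-D columns so
# it can live in a DataFrame, and is then rebuilt. The column names here
# are illustrative, not stingray's exact convention.
arr = np.arange(24.0).reshape(4, 2, 3)

df = pd.DataFrame(
    {f"flux_dim{i}_{j}": arr[:, i, j] for i in range(2) for j in range(3)}
)

# Rebuild the n-D array from the 1-D columns
rebuilt = np.stack(
    [df[f"flux_dim{i}_{j}"].to_numpy() for i in range(2) for j in range(3)],
    axis=1,
).reshape(4, 2, 3)
```

The round trip is lossless as long as the dimension sizes are encoded in (or recoverable from) the column names.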
Create a StingrayObject from data in an xarray Dataset.
The dataset MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
the frequency range over which to calculate the variability quantity
energy_spec : list or tuple (emin, emax, N, type)
if a list is specified, this is interpreted as a list of bin edges;
if a tuple is provided, this will encode the minimum and maximum
energies, the number of intervals, and lin or log.
Other Parameters:
ref_band : [emin, emax], floats; default None
minimum and maximum energy of the reference band. If None, the
full band is used.
the frequency range over which to calculate the variability quantity
energy_spec : list or tuple (emin, emax, N, type)
if a list is specified, this is interpreted as a list of bin edges;
if a tuple is provided, this will encode the minimum and maximum
energies, the number of intervals, and lin or log.
Other Parameters:
ref_band : [emin, emax], floats; default None
minimum and maximum energy of the reference band. If None, the
full band is used.
use_pi : bool, default False
Use channel instead of energy
Attributes:
events1 : array-like
list of events used to produce the spectrum
freq_interval : array-like
interval of frequencies used to calculate the spectrum
energy_intervals : [[e00, e01], [e10, e11], ...]
energy intervals used for the spectrum
spectrum : array-like
the spectral values, corresponding to each energy interval
Add the array values of two StingrayObject instances element by element, assuming
the main array attributes of the instances match exactly.
All array attrs ending with _err are treated as error bars and propagated with the
sum of squares.
GTIs are crossed, so that only common intervals are saved.
Parameters:
other : StingrayTimeseries object
A second time series object
Other Parameters:
operated_attrs : list of str or None
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs : list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation : function
Function to be called to propagate the errors
inplace : bool
If True, overwrite the current time series. Otherwise, return a new one.
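The element-wise addition with quadrature error propagation can be sketched with plain dictionaries (illustrative, not stingray's internal code):

```python
import numpy as np

# Minimal sketch: plain array attributes are summed element by element,
# while *_err attributes are combined in quadrature (square root of the
# sum of squares), as described above.
a = {"counts": np.array([10.0, 12.0]), "counts_err": np.array([3.0, 4.0])}
b = {"counts": np.array([5.0, 2.0]), "counts_err": np.array([4.0, 3.0])}

summed = {}
for name in a:
    if name.endswith("_err"):
        summed[name] = np.sqrt(a[name] ** 2 + b[name] ** 2)
    else:
        summed[name] = a[name] + b[name]
# summed["counts"] -> [15., 14.]; summed["counts_err"] -> [5., 5.]
```

Quadrature is the right rule here because the two time series are required to be statistically independent.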
Apply a mask to all array attributes of the object
Parameters:
mask : array of bool
The mask. Has to be of the same length as self.time
Returns:
ts_new : StingrayObject object
The new object with the mask applied if inplace is False, otherwise the
same object.
Other Parameters:
inplace : bool
If True, overwrite the current object. Otherwise, return a new one.
filtered_attrs : list of str or None
Array attributes to be filtered. Defaults to all array attributes if None.
The other array attributes will be set to None. The main array attr is always
included.
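The masking strategy can be sketched with a toy class (illustrative only; the real classes do more bookkeeping, e.g. GTIs and meta attributes):

```python
import numpy as np

# Sketch of the rule described above: every attribute with the same length
# as the main array ("array attributes") is filtered with the same mask.
class Toy:
    def __init__(self):
        self.time = np.array([1.0, 2.0, 3.0, 4.0])   # main array attribute
        self.counts = np.array([10, 0, 7, 3])         # array attribute

    def apply_mask(self, mask):
        new = Toy()
        for name, value in vars(self).items():
            arr = np.asarray(value)
            if arr.shape[0] == len(self.time):  # same length as main array
                setattr(new, name, arr[mask])
        return new

ts = Toy().apply_mask(np.array([True, False, True, True]))
# ts.time -> [1., 3., 4.], ts.counts -> [10, 7, 3]
```

Attributes that merely happen to share the length (like gti) are the reason for the not_array_attr escape hatch described in the introduction.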
Clean up the list of attributes, only giving out those pointing to data.
List all the attributes that point directly to valid data. This method goes through all the
attributes of the class, eliminating methods, properties, and attributes that are complicated
to serialize such as other StingrayObject, or arrays of objects.
This function does not distinguish between array-like data and scalar data.
Returns:
data_attributes : list of str
List of attributes pointing to data that are not methods, properties,
or other StingrayObject instances.
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the entries in the table’s meta dictionary will
form the new meta attributes of the object.
Create a StingrayObject object from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
Since pandas does not support n-D data, multi-dimensional arrays can be
specified as <colname>_dimN_M_K etc.
See documentation of make_1d_arrays_into_nd for details.
Create a StingrayObject from data in an xarray Dataset.
The dataset MUST contain at least a column named like the
main_array_attr.
The rest of the columns will form the array attributes of the
new object, while the attributes in ds.attrs will
form the new meta attributes of the object.
List the names of the internal array attributes of the Stingray Object.
These are array attributes that can be set by properties, and are generally indicated
by an underscore followed by the name of the property that links to it (e.g.
_counts in Lightcurve).
By array attributes, we mean the ones with the same size and shape as
main_array_attr (e.g. time in EventList).
the frequency range over which to calculate the variability quantity
energy_spec : list or tuple (emin, emax, N, type)
if a list is specified, this is interpreted as a list of bin edges;
if a tuple is provided, this will encode the minimum and maximum
energies, the number of intervals, and lin or log.
Other Parameters:
ref_band : [emin, emax], floats; default None
minimum and maximum energy of the reference band. If None, the
full band is used.
Return a pretty-printed string representation of the object.
This is useful for debugging, and for interactive use.
Other Parameters:
func_to_apply : function
A function that modifies the attributes listed in attrs_to_apply.
It must return the modified attributes and a label to be printed.
If None, no function is applied.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain
at least a column named like the main_array_attr.
The default ascii format is enhanced CSV (ECSV). Data formats
supporting the serialization of metadata (such as ECSV and HDF5) can
contain all attributes such as mission, gti, etc with
no significant loss of information. Other file formats might lose part
of the metadata, so must be used with care.
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, but not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values should be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Path and file name for the file to be read.
fmt: str
Available options are ‘pickle’, ‘hea’, and any Table-supported
format such as ‘hdf5’, ‘ascii.ecsv’, etc.
Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.
All array attrs ending with _err are treated as error bars and propagated with the
sum of squares.
GTIs are crossed, so that only common intervals are saved.
Parameters:
other : StingrayTimeseries object
A second time series object
Other Parameters:
operated_attrs : list of str or None
Array attributes to be operated on. Defaults to all array attributes not ending in
_err.
The other array attributes will be discarded from the time series to avoid
inconsistencies.
error_attrs : list of str or None
Array attributes to be operated on with error_operation. Defaults to all array
attributes ending with _err.
error_operation : function
Function to be called to propagate the errors
inplace : bool
If True, overwrite the current time series. Otherwise, return a new one.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the meta dictionary.
Other Parameters:
no_longdouble : bool
If True, reduce the precision of longdouble arrays to double precision.
This needs to be done in some cases, e.g. when the table is to be saved
in an architecture not supporting extended precision (e.g. ARM), but can
also be useful when an extended precision is not needed.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
Since pandas does not support n-D data, multi-dimensional arrays are
converted into columns before the conversion, with names <colname>_dimN_M_K etc.
See documentation of make_nd_into_arrays for details.
Array attributes (e.g. time, pi, energy, etc. for
EventList) are converted into columns, while meta attributes
(mjdref, gti, etc.) are saved into the ds.attrs dictionary.
any other formats compatible with the writers in
astropy.table.Table (ascii.ecsv, hdf5, etc.)
.. note::
Complex values can be dealt with out-of-the-box in some formats
like HDF5 or FITS, but not in others (e.g. all ASCII formats).
With these formats, and in any case when fmt is ``None``, complex
values will be stored as two columns of real numbers, whose names
are of the format <variablename>.real and <variablename>.imag
Parameters:
filename: str
Name and path of the file to save the object list to.
fmt: str
The file format to store the data in.
Available options are pickle, hdf5, ascii, fits
Upper limit on a sinusoidal modulation, given a measured power in the PDS/Z search.
Eq. 10 in Vaughan+94 and a_from_ssig are equivalent, but Vaughan+94
corrects further for the response inside an FFT bin and at frequencies close
to Nyquist. These two corrections are added by using fft_corr=True and by
setting nyq_ratio to the correct \(f / f_{Nyq}\) of the FFT peak.
To understand the meaning of this amplitude: if the modulation is described by
\[p = \overline{p} (1 + a \sin x),\]
this function returns a.
If it is a sum of sinusoidal harmonics instead,
\[p = \overline{p} \left(1 + \sum_l a_l \sin(l x)\right),\]
a is equivalent to \(\sqrt{\sum_l a_l^2}\).
The number of counts in the light curve used to calculate the spectrum
Returns:
a: float
The modulation amplitude that could produce P>pmeas with 1 - c probability
Other Parameters:
n: int
The number of summed powers to obtain pmeas. It can be multiple
harmonics of the PDS, adjacent bins in a PDS summed to collect all the
power in a QPO, or the n in Z^2_n
c: float
The confidence value for the probability (e.g. 0.95 = 95%)
fft_corr: bool
Apply a correction for the expected power concentrated in an FFT bin,
which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi
at the bin edge).
nyq_ratio: float
Ratio of the frequency of this feature with respect to the Nyquist
frequency. Important to know when dealing with FFTs, because the FFT
response decays between 0 and f_Nyq similarly to the response inside
a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq
Note:
This is stingray’s original implementation of the probability
distribution for the power spectrum. It is superseded by the
implementation in pds_probability for practical purposes, but
remains here for backwards compatibility and for its educational
value as a clear, explicit implementation of the correct
probability distribution.
Compute the probability of detecting the current power under
the assumption that there is no periodic oscillation in the data.
This computes the single-trial p-value that the power was
observed under the null hypothesis that there is no signal in
the data.
Important: the underlying assumptions that make this calculation valid
are:
the powers in the power spectrum follow a chi-square distribution
the power spectrum is normalized according to Leahy (1983), such
that the powers have a mean of 2 and a variance of 4
there is only white noise in the light curve. That is, there is no
aperiodic variability that would change the overall shape of the power
spectrum.
Also note that the p-value is for a single trial, i.e. the power
currently being tested. If more than one power or more than one power
spectrum are being tested, the resulting p-value must be corrected for the
number of trials (Bonferroni correction).
Mathematical formulation in Groth (1975).
Original implementation in IDL by Anna L. Watts.
Parameters:
power : float
The squared Fourier amplitude of a spectrum to be evaluated
nspec : int
The number of spectra or frequency bins averaged in power.
This matters because averaging spectra or frequency bins increases
the signal-to-noise ratio, i.e. makes the statistical distributions
of the noise narrower, such that a smaller power might be very
significant in averaged spectra even though it would not be in a single
power spectrum.
Returns:
pval : float
The classical p-value of the observed power being consistent with
the null hypothesis of white noise
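Under these assumptions the p-value follows from the chi-squared survival function. A minimal scipy sketch (assumption: Leahy normalization, so that the sum of nspec averaged powers is a chi-squared variable with 2·nspec degrees of freedom):

```python
import math
from scipy.stats import chi2

# Sketch of the null-hypothesis test described above: for Leahy-normalized
# powers, nspec * (averaged power) ~ chi2 with 2 * nspec degrees of freedom.
def leahy_pvalue(power, nspec=1):
    return chi2.sf(nspec * power, 2 * nspec)

# For a single spectrum this reduces to exp(-power / 2)
pval = leahy_pvalue(30.0)
```

Remember that this is a single-trial p-value; multiply by the number of trials (Bonferroni) before claiming a detection over a whole spectrum.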
Number of Gaussian sigmas corresponding to tail probability.
This function computes the value of the characteristic function of a
standard Gaussian distribution for the tail probability equivalent to the
provided p-value, and turns this value into units of standard deviations
away from the Gaussian mean. This allows the user to make a statement
about the signal such as “I detected this pulsation at 4.1 sigma”.
The example values below are obtained by brute-force integrating the
Gaussian probability density function using the mpmath library
between Nsigma and +inf.
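The conversion can be sketched with scipy's standard normal distribution (a minimal sketch of the idea, not stingray's exact implementation):

```python
from scipy.stats import norm

# Turn a one-sided tail probability into the equivalent number of standard
# deviations of a standard Gaussian.
def p_to_sigma(pvalue):
    return norm.isf(pvalue)  # inverse survival function

sigma = p_to_sigma(norm.sf(4.1))  # recovers 4.1
```

norm.sf and norm.isf are exact inverses, which is why the round trip recovers the input number of sigmas.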
Calculate the single-trial p-value from a total p-value
Let us say that we want to reject a null hypothesis at the
pn level, after executing n different measurements.
This might be the case because, e.g., we
want to have a 1% probability of detecting a signal in an
entire power spectrum, and we need to correct the detection
level accordingly.
The typical procedure is dividing the initial probability
(often called _epsilon_) by the number of trials. This is
called the Bonferroni correction and it is often a good
approximation, when pn is low: p1=pn/n.
However, if pn is close to 1, this approximation gives
incorrect results.
Here we calculate this probability by inverting the Binomial
problem. Given that (see p_multitrial_from_single_trial)
the probability of getting more than one hit in n trials,
given the single-trial probability p, is
\[P (k \geq 1) = 1 - (1 - p)^n,\]
we get the single trial probability from the multi-trial one
from
\[p = 1 - (1 - P)^{(1/n)}\]
This is also known as Šidák correction.
Parameters:
pn : float
The significance at which we want to reject the null
hypothesis after multiple trials
n : int
The number of trials
Returns:
p1 : float
The significance at which we reject the null hypothesis on
each single trial.
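The Šidák inversion above is a one-liner; the sketch below also shows how close the Bonferroni approximation is when pn is small:

```python
# Šidák correction: invert the binomial relation P(k >= 1) = 1 - (1 - p)^n
# to get the single-trial probability from the multi-trial one.
def p_single_from_multitrial(pn, n):
    return 1 - (1 - pn) ** (1.0 / n)

p1 = p_single_from_multitrial(0.01, 100)
# Bonferroni approximation for comparison: 0.01 / 100 = 1e-4 (within ~0.5%)
```

For pn close to 1 the two diverge, and only the exact inversion gives correct results.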
Return the detection level (with probability 1 - epsilon) for a Power
Density Spectrum of nbins bins, normalized a la Leahy (1983), based on
the 2-dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin)
and multiple PDS averaging (n_summed_spectra)
Parameters:
epsilon : float
The single-trial probability value(s)
Other Parameters:
ntrial : int
The number of independent trials (the independent bins of the PDS)
n_summed_spectra : int
The number of power density spectra that have been averaged to obtain
this power level
n_rebin : int
The number of power density bins that have been averaged to obtain
this power level
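A minimal scipy sketch of this detection-level logic (assumption: Leahy normalization and a Bonferroni correction over ntrial independent bins; averaging n powers turns the statistics into a chi-squared with 2n degrees of freedom, divided by n):

```python
from scipy.stats import chi2

# Sketch of a Leahy-normalized detection level, illustrative only.
def detection_level(epsilon=0.01, ntrial=1, n_summed_spectra=1, n_rebin=1):
    n = n_summed_spectra * n_rebin
    # Bonferroni-correct the single-trial probability, then invert the
    # chi-squared survival function for the averaged power.
    return chi2.isf(epsilon / ntrial, 2 * n) / n

level = detection_level(0.01)  # single spectrum, single trial
```

More trials raise the level; more averaging lowers it, since the noise distribution narrows around the mean power of 2.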
Give the probability of a given power level in a PDS.
Return the probability of a certain power level in a Power Density
Spectrum of nbins bins, normalized a la Leahy (1983), based on
the 2-dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin)
and multiple PDS averaging (n_summed_spectra)
Parameters:
level : float or array of floats
The power level for which we are calculating the probability
Returns:
epsilon : float
The probability value(s)
Other Parameters:
ntrial : int
The number of independent trials (the independent bins of the PDS)
n_summed_spectra : int
The number of power density spectra that have been averaged to obtain
this power level
n_rebin : int
The number of power density bins that have been averaged to obtain
this power level
The number of counts in the light curve used to calculate the spectrum
Returns:
pf: float
The pulsed fraction that could produce P>pmeas with 1 - c probability
Other Parameters:
n: int
The number of summed powers to obtain pmeas. It can be multiple
harmonics of the PDS, adjacent bins in a PDS summed to collect all the
power in a QPO, or the n in Z^2_n
c: float
The confidence value for the probability (e.g. 0.95 = 95%)
fft_corr: bool
Apply a correction for the expected power concentrated in an FFT bin,
which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi
at the bin edge).
nyq_ratio: float
Ratio of the frequency of this feature with respect to the Nyquist
frequency. Important to know when dealing with FFTs, because the FFT
response decays between 0 and f_Nyq similarly to the response inside
a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq
Confidence limits on power, given a (theoretical) signal power.
This is to be used when we expect a given power (e.g. from the pulsed
fraction measured in previous observations) and we want to know the
range of values the measured power could take to a given confidence level.
Adapted from Vaughan et al. 1994, noting that, after appropriate
normalization of the spectral stats, the distribution of powers in the PDS
and the Z^2_n searches is always described by a noncentral chi squared
distribution.
Parameters:
preal: float
The theoretical signal-generated value of power
Returns:
pmeas: [float, float]
The upper and lower confidence interval (a, 1-a) on the measured power
Other Parameters:
n: int
The number of summed powers to obtain the result. It can be multiple
harmonics of the PDS, adjacent bins in a PDS summed to collect all the
power in a QPO, or the n in Z^2_n
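The confidence range described above can be sketched directly with scipy's noncentral chi-squared distribution. The function name and the exact (a, 1 - a) quantile convention are assumptions on top of the text:

```python
from scipy.stats import ncx2

def power_confidence_limits_sketch(preal, n=1, c=0.95):
    # Measured power ~ noncentral chi^2 with 2n degrees of freedom and
    # noncentrality preal; take the (a, 1 - a) quantiles, with a = 1 - c
    dist = ncx2(2 * n, preal)
    a = 1 - c
    return dist.ppf(a), dist.ppf(1 - a)
```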
Upper limit on signal power, given a measured power in the PDS/Z search.
Adapted from Vaughan et al. 1994, noting that, after appropriate
normalization of the spectral stats, the distribution of powers in the PDS
and the Z^2_n searches is always described by a noncentral chi squared
distribution.
Note that Vaughan+94 gives p(pmeas | preal), while we are interested in
p(preal | pmeas), which is not described by the NCX2 stat. Rather than
integrating the CDF of this probability distribution, we start from a
reasonable approximation and fit to find the preal that gives pmeas as
a (e.g. 95%) confidence limit.
As Vaughan+94 shows, this power is always larger than the observed one.
This is because we are looking for the maximum signal power that,
combined with noise powers, would give the observed power. This involves
the possibility that noise powers partially cancel out some signal power.
Parameters:
pmeas: float
The measured value of power
Returns:
psig: float
The signal power that could produce P>pmeas with 1 - c probability
Other Parameters:
n: int
The number of summed powers to obtain pmeas. It can be multiple
harmonics of the PDS, adjacent bins in a PDS summed to collect all the
power in a QPO, or the n in Z^2_n
c: float
The confidence value for the probability (e.g. 0.95 = 95%)
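The search for the signal power can be sketched as a one-dimensional root find on the noncentral chi-squared CDF. The quantile convention here (pmeas at the 1 - c quantile, so a signal this strong would exceed pmeas in a fraction c of realizations) is an assumption consistent with the note that the limit is always larger than the observed power; this is not stingray's implementation:

```python
from scipy.optimize import brentq
from scipy.stats import ncx2

def signal_power_upper_limit(pmeas, n=1, c=0.95):
    # Find the noncentrality (signal power) psig for which the measured
    # power pmeas sits at the (1 - c) quantile of ncx2(2n, psig)
    def quantile_mismatch(psig):
        return ncx2(2 * n, psig).cdf(pmeas) - (1 - c)
    # The root is bracketed between ~0 and a generous multiple of pmeas
    return brentq(quantile_mismatch, 1e-5, 10 * pmeas + 100)
```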
stingray.gti.bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1, epsilon=0.001)
Compute start/stop times of equal time intervals, compatible with GTIs,
and map them to the indices of an array of time stamps.
Used to start each FFT/PDS/cospectrum from the start of a GTI,
and stop before the next gap in data (end of GTI).
In this case, it is necessary to specify the time array containing the
times of the light curve bins.
Returns start and stop bins of the intervals to use for the PDS.
Parameters:
gtis : 2-d float array
List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].
segment_size : float
Length of each time segment.
time : array-like
Array of time stamps.
Returns:
spectrum_start_bins : array-like
List of starting bins in the original time array to use in spectral
calculations.
spectrum_stop_bins : array-like
List of end bins to use in the spectral calculations.
Other Parameters:
dt : float, default median(diff(time))
Time resolution of the light curve.
epsilon : float, default 0.001
The tolerance, in fraction of dt, for the comparisons at the
borders.
fraction_step : float
If the step is not a full segment_size but less (e.g. a moving
window), this indicates the ratio between the step and
segment_size (e.g. 0.5 means that the window shifts by half
segment_size).
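A minimal sketch of the interval-to-bin mapping described above, for an evenly sampled time array. The epsilon handling is simplified to a fixed tolerance, and the function name is illustrative; this is not stingray's implementation:

```python
import numpy as np

def segment_bins_from_gtis(gtis, segment_size, time, dt=None):
    time = np.asarray(time)
    if dt is None:
        dt = np.median(np.diff(time))
    nbin = int(np.rint(segment_size / dt))  # bins per segment
    edges = time - dt / 2  # left edges of the time bins
    start_bins, stop_bins = [], []
    for gti_start, gti_stop in gtis:
        # first bin whose left edge falls at/after the GTI start
        idx = int(np.searchsorted(edges, gti_start - 1e-3 * dt))
        # add whole segments while they still fit inside the GTI
        while idx + nbin <= len(time) and edges[idx] + segment_size <= gti_stop + 1e-3 * dt:
            start_bins.append(idx)
            stop_bins.append(idx + nbin)
            idx += nbin
    return np.asarray(start_bins), np.asarray(stop_bins)
```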
A mask labelling all time stamps that are included in the GTIs versus
those that are not.
new_gtis : Nx2 array
An array of new GTIs created by this function.
Other Parameters:
safe_interval : float or [float, float], default None
A safe interval to exclude at both ends (if single float) or the start
and the end (if pair of values) of GTIs. If None, no safe interval
is applied to data.
min_length : float
An optional minimum length for the GTIs to be applied. Only GTIs longer
than min_length will be considered when creating the mask.
return_new_gtis : bool
If True, return the list of new GTIs (if min_length > 0)
dt : float
Time resolution of the data, i.e. the interval between time stamps.
epsilon : float
Fraction of dt that is tolerated at the borders of a GTI.
Get the indices of events from different segments of the observation.
This is a generator, yielding the boundaries of each segment and the
corresponding indices in the time array.
Parameters:
times : float np.array
Array of times.
gti : [[gti00, gti01], [gti10, gti11], …]
Good time intervals.
segment_size : float
Length of segments.
Yields:
t0: float
Start time of current segment.
t1: float
End time of current segment.
startidx: int
Start index of the current segment in the time array.
stopidx: int
End index of the current segment in the time array. Note that this is
larger by one, so that time[startidx:stopidx] returns the correct
time interval.
Other Parameters:
check_sorted : bool, default True
If True, checks that the time array is sorted.
Examples
>>> times = [0.1, 0.2, 0.5, 0.8, 1.1]
>>> gtis = [[0, 0.55], [0.6, 2.1]]
>>> vals = generate_indices_of_segment_boundaries_unbinned(
...     times, gtis, 0.5)
>>> v0 = next(vals)
>>> assert np.allclose(v0[:2], [0, 0.5])
>>> # Note: 0.5 is not included in the interval
>>> assert np.allclose(v0[2:], [0, 2])
>>> v1 = next(vals)
>>> assert np.allclose(v1[:2], [0.6, 1.1])
>>> # Again: 1.1 is not included in the interval
>>> assert np.allclose(v1[2:], [3, 4])
Base strings of GTI extensions. For missions adding the detector number
to GTI extensions like, e.g., XMM and Chandra, this function
automatically adds the detector number and looks for all matching
GTI extensions (e.g. “STDGTI” will also retrieve “STDGTI05”; “GTI0”
will also retrieve “GTI00501”).
Returns:
gti_list: [[gti00, gti01], [gti10, gti11], …]
List of good time intervals, as the intersection of all matching GTIs.
If there are two matching extensions, with GTIs [[0, 50], [100, 200]]
and [[40, 70]] respectively, this function will return [[40, 50]].
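The intersection logic in the example above can be sketched as a pairwise overlap of the two interval lists. The function name is illustrative, not stingray's API:

```python
def cross_gtis_pair(gti0, gti1):
    # Intersect every interval in gti0 with every interval in gti1,
    # keeping only overlaps of nonzero length
    result = []
    for s0, e0 in gti0:
        for s1, e1 in gti1:
            start, stop = max(s0, s1), min(e0, e1)
            if start < stop:
                result.append([start, stop])
    return result
```

With the extensions from the example, this returns [[40, 50]]: the first GTI of the first list overlaps [40, 70] in [40, 50], and the second does not overlap at all.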
stingray.gti.gti_border_bins(gtis, time, dt=None, epsilon=0.001)
Find the indices in a time array corresponding to the borders of GTIs.
GTIs shorter than the bin time are not returned.
Parameters:
gtis : 2-d float array
List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].
time : array-like
Array of time stamps.
Returns:
spectrum_start_bins : array-like
List of starting bins of each GTI
spectrum_stop_bins : array-like
List of stop bins of each GTI. The elements corresponding to these bins
should not be included.
Other Parameters:
dt : float or array of floats, default median(diff(time))
Time resolution of the light curve. Can be an array of the same dimension
as time
epsilon : float, default 0.001
The tolerance, in fraction of dt, for the comparisons at the
borders.
fraction_step : float
If the step is not a full segment_size but less (e.g. a moving
window), this indicates the ratio between the step and
segment_size (e.g. 0.5 means that the window shifts by half
segment_size).
Examples
>>> times = np.arange(0.5, 13.5)
>>> gti_border_bins([[16., 18.]], times)
Traceback (most recent call last):
...
ValueError: Invalid time interval for the given GTIs
If GTIs are mutually exclusive, it calls append_gtis. Otherwise we put
the extremes of partially overlapping GTIs on an ideal line and look at the
number of opened and closed intervals. When the number of closed and opened
intervals is the same, the full GTI is complete and we close it.
In practice, we assign to each opening time of a GTI the value -1, and
the value 1 to each closing time; when the cumulative sum is zero, the
GTI has ended. The timestamp after each closed GTI is the start of a new
one.
In case one GTI ends exactly where another one starts, the cumulative sum is 0
but we do not want to close. In this case, we make a check that the next element
of the sequence is not equal to the one where we would close.
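The open/close bookkeeping described above can be sketched with a running sum over sorted boundary events. This is a simplified stand-in for the real merge (it ignores the non-overlapping fast path), and the function name is illustrative:

```python
import numpy as np

def merge_overlapping_gtis(gtis):
    gtis = np.asarray(gtis, dtype=float)
    # -1 marks a GTI opening, +1 a closing, as described above
    times = np.concatenate([gtis[:, 0], gtis[:, 1]])
    flags = np.concatenate([-np.ones(len(gtis)), np.ones(len(gtis))])
    order = np.lexsort((flags, times))  # at equal times, opens sort first
    merged, running = [], 0
    for t, f in zip(times[order], flags[order]):
        if running == 0:
            start = t  # opening a fresh merged interval
        running -= int(f)  # +1 on an open (-1 flag), -1 on a close (+1 flag)
        if running == 0:
            merged.append([start, t])  # running sum back to zero: close
    return merged
```

Sorting opens before closes at equal times implements the touching-GTI check: a GTI ending exactly where another starts does not terminate the merged interval.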
Compute start/stop times of equal time intervals, compatible with GTIs.
Used to start each FFT/PDS/cospectrum from the start of a GTI,
and stop before the next gap in data (end of GTI).
Parameters:
gtis : 2-d float array
List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
segment_size : float
Length of the time segments
fraction_step : float
If the step is not a full segment_size but less (e.g. a moving
window), this indicates the ratio between the step and
segment_size (e.g. 0.5 means that the window shifts by half
segment_size).
Returns:
spectrum_start_times : array-like
List of starting times to use in the spectral calculations.
spectrum_stop_times : array-like
List of end times to use in the spectral calculations.
Get the name of a header key or table column from the mission database.
Many entries in the mission database have default values that can be
altered for specific instruments or observing modes. Here, if there is a
definition for a given instrument and mode, we take that; otherwise we use
the default.
Parameters:
info : dict
Dictionary containing all the information for a given mission.
It can be nested, e.g. contain some info for a given instrument, and
for each observing mode of that instrument.
key : str
The key to read from the info dictionary
default : object
The default value. It can be of any type, depending on the expected
type for the entry.
Loads event list from HDU EVENTS of file fits_file, with Good Time
intervals. Optionally, returns additional columns of data from the same
HDU of the events.
Parameters:
fits_file : str
Returns:
retvals : Object with the following attributes:
ev_list : array-like
Event times in Mission Epoch Time
gti_list : [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
GTIs in Mission Epoch Time
additional_data : dict
A dictionary, where each key is the one specified in additional_columns.
The data are an array with the values of the specified column in the
FITS file.
t_start : float
Start time in Mission Epoch Time
t_stop : float
Stop time in Mission Epoch Time
pi_list : array-like
Raw instrument energy channels
cal_pi_list : array-like
Calibrated PI channels (those that can be easily converted to energy
values, regardless of the instrument setup.)
energy_list : array-like
Energy of each photon in keV (only for NuSTAR, NICER, XMM)
instr : str
Name of the instrument (e.g. EPIC-pn or FPMA)
mission : str
Name of the mission (e.g. XMM or NuSTAR)
mjdref : float
MJD reference time for the mission
header : str
Full header of the FITS file, for debugging purposes
detector_id : array-like, int
Detector id for each photon (e.g. each of the CCDs composing XMM’s or
Chandra’s instruments)
Other Parameters:
additional_columns : list of str, optional
A list of keys corresponding to the additional columns to extract from
the event HDU (e.g. ['PI', 'X'])
gtistring : str
Comma-separated list of accepted GTI extensions (default GTI,STDGTI),
with or without appended integer number denoting the detector
gti_file : str, default None
External GTI file
hduname : str or int, default 1
Name of the HDU containing the event list
column : str, default None
The column containing the time values. If None, we use the name
specified in the mission database, and if there is nothing there,
"TIME"
Note : This function is supposed to be used after the plot
function. Otherwise it will save a blank image with no plot.
Parameters:
filename : str
The name of the image file. The extension must be specified in the
file name. For example, a filename with the .png extension will give a
rasterized image, while the .pdf extension will give vectorized
output.
kwargs : keyword arguments
Keyword arguments to be passed to the savefig function of
matplotlib.pyplot. For example, use bbox_inches='tight' to
remove the undesirable whitespace around the image.
stingray.utils.baseline_als(x, y, lam=None, p=None, niter=10, return_baseline=False, offset_correction=False)
Baseline Correction with Asymmetric Least Squares Smoothing.
Parameters:
x : array-like
the sample time/number/position
y : array-like
the data series corresponding to x
lam : float
the lambda parameter of the ALS method. This controls how much the
baseline can adapt to local changes. A higher value corresponds to a
stiffer baseline
p : float
the asymmetry parameter of the ALS method. This controls the overall
slope tolerated for the baseline. A higher value corresponds to a
higher possible slope
Returns:
y_subtracted : array-like, same size as y
The input time series with the fitted baseline subtracted
baseline : array-like, same size as y
Fitted baseline. Only returned if return_baseline is True
Other Parameters:
niter : int
The number of iterations to perform
return_baseline : bool
return the baseline?
offset_correction : bool
also correct for an offset to align with the running mean of the scan
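The core of the ALS method can be sketched with the classic iteratively reweighted penalized least squares of Eilers & Boelens (2005). This is a generic sketch of the algorithm, not stingray's implementation (which adds offset correction and parameter defaults):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def baseline_als_sketch(y, lam=1e5, p=0.01, niter=10):
    # Points above the current baseline get weight p, points below 1 - p,
    # so peaks are largely ignored while the baseline tracks the floor
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    w = np.ones(L)
    for _ in range(niter):
        W = sparse.spdiags(w, 0, L, L)
        Z = W + lam * D.dot(D.transpose())  # smoothness-penalized system
        z = spsolve(Z.tocsc(), w * y)       # current baseline estimate
        w = p * (y > z) + (1 - p) * (y < z)
    return z
```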
Vaughan et al. 2003, MNRAS 345, 1271 give three measurements of source
intrinsic variance: if a light curve has a total variance of \(S^2\),
and each point has an error bar \(\sigma_{err}\), the excess variance
is defined as
\[\sigma_{XS}^2 = S^2 - \overline{\sigma_{err}^2}.\]
If fvar, return the fractional mean square variability \(F_{var}\).
If none, return the unnormalized excess variance
\(\sigma_{XS}\). If norm_xs, return the normalized excess variance
\(\sigma_{XS}\)
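The three variability measures can be sketched as follows; the function and option names mirror the description above but are illustrative, not stingray's API:

```python
import numpy as np

def excess_variance_sketch(y, yerr, normalization="fvar"):
    # Total variance minus the mean square of the error bars
    s2 = np.var(y, ddof=1)
    xs = s2 - np.mean(np.asarray(yerr) ** 2)
    if normalization == "none":
        return xs  # unnormalized excess variance
    mean = np.mean(y)
    if normalization == "norm_xs":
        return xs / mean ** 2  # normalized excess variance
    return np.sqrt(np.abs(xs)) / mean  # fractional rms variability F_var
```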
Vary slightly the bin time to have a power of two number of bins.
Given an FFT length and a proposed bin time, return a bin time
slightly shorter than the original, that will produce a power-of-two number
of FFT bins.
Parameters:
fftlen : int
Number of positive frequencies in a proposed Fourier spectrum
tbin : float
The proposed time resolution of a light curve
Returns:
res : float
A time resolution that will produce a Fourier spectrum with fftlen frequencies and
a number of FFT bins that are a power of two
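One way to sketch this adjustment: keep the total duration fftlen * tbin fixed and shrink the bin time so the number of bins becomes the next power of two. This is a plausible reading of the description, not necessarily stingray's exact formula:

```python
import numpy as np

def bin_time_for_power_of_two(fftlen, tbin):
    # Keep the total duration fixed; divide it into the next
    # power-of-two number of bins, giving a slightly shorter bin time
    duration = fftlen * tbin
    nbin = 2 ** int(np.ceil(np.log2(fftlen)))
    return duration / nbin
```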
Optimized version of frequentist symmetrical errors.
Uses a lookup table in order to limit the calls to poisson_conf_interval
Parameters:
counts : iterable
An array of Poisson-distributed numbers
Returns:
err : numpy.ndarray
An array of uncertainties associated with the Poisson counts in
counts
Examples
>>> from astropy.stats import poisson_conf_interval
>>> counts = np.random.randint(0, 1000, 100)
>>> # ---- Do it without the lookup table ----
>>> err_low, err_high = poisson_conf_interval(np.asarray(counts),
...     interval='frequentist-confidence', sigma=1)
>>> err_low -= np.asarray(counts)
>>> err_high -= np.asarray(counts)
>>> err = (np.absolute(err_low) + np.absolute(err_high)) / 2.0
>>> # Do it with this function
>>> err_thisfun = poisson_symmetrical_errors(counts)
>>> # Test that results are always the same
>>> assert np.allclose(err_thisfun, err)
stingray.utils.rebin_data(x, y, dx_new, yerr=None, method='sum', dx=None)
Rebin some data to an arbitrary new data resolution. Either sum
the data points in the new bins or average them.
Parameters:
x: iterable
The independent variable with some resolution, which can vary throughout
the time series.
y: iterable
The dependent variable to be binned.
dx_new: float
The new resolution of the independent variable x
Returns:
xbin: numpy.ndarray
The midpoints of the new bins in x
ybin: numpy.ndarray
The binned quantity y
ybin_err: numpy.ndarray
The uncertainties of the binned values of y.
step_size: float
The size of the binning step
Other Parameters:
yerr: iterable, optional
The uncertainties of y, to be propagated during binning.
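For the common case of an evenly sampled series and an integer rebinning factor, the 'sum' method can be sketched with a simple reshape; this sketch omits the error propagation and the general non-integer-factor logic of the full function:

```python
import numpy as np

def rebin_sum(x, y, dx_new, dx=None):
    # Assumes dx_new is an integer multiple of the original resolution
    if dx is None:
        dx = np.median(np.diff(x))
    step = int(np.rint(dx_new / dx))
    nbin_new = len(x) // step
    y_trim = np.asarray(y)[: nbin_new * step]
    ybin = y_trim.reshape(nbin_new, step).sum(axis=1)
    xbin = np.asarray(x)[: nbin_new * step].reshape(nbin_new, step).mean(axis=1)
    return xbin, ybin
```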
This subpackage defines classes and functions related to parametric modelling of various types of
data sets. Currently, most functionality is focused on modelling Fourier products (especially
power spectra and averaged power spectra), but rudimentary functionality exists for modelling
e.g. light curves.
class stingray.modeling.LogLikelihood(x, y, model, **kwargs)
Abstract Base Class defining the structure of a LogLikelihood object.
This class cannot be used on its own, since each statistical distribution
has its own definition of the likelihood, which must be implemented in subclasses.
Parameters:
x : iterable
x-coordinate of the data. Could be multi-dimensional.
y : iterable
y-coordinate of the data. Could be multi-dimensional.
model : an astropy.modeling.FittableModel instance
Your model
kwargs
keyword arguments specific to the individual sub-classes. For
details, see the respective docstrings for each subclass
This is where you define your log-likelihood. Do this, but do it in a subclass!
class stingray.modeling.GaussianLogLikelihood(x, y, yerr, model)
Likelihood for data with Gaussian uncertainties.
Astronomers also call this likelihood Chi-Squared, but be aware
that this has nothing to do with the likelihood based on the
Chi-square distribution, which is defined in
PSDLogLikelihood in this module!
Use this class here whenever your data has Gaussian uncertainties.
Parameters:
x : iterable
x-coordinate of the data
y : iterable
y-coordinate of the data
yerr : iterable
the uncertainty on the data, as standard deviation
model : an astropy.modeling.FittableModel instance
The model to use in the likelihood.
Attributes:
x : iterable
x-coordinate of the data
y : iterable
y-coordinate of the data
yerr : iterable
the uncertainty on the data, as standard deviation
Evaluate the Gaussian log-likelihood for a given set of parameters.
Parameters:
pars : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-likelihood. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-likelihood, i.e.
-loglike, rather than loglike. This is useful e.g.
for optimization routines, which generally minimize
functions.
Returns:
loglike : float
The log(likelihood) value for the data and model.
class stingray.modeling.PoissonLogLikelihood(x, y, model)
Likelihood for data with uncertainties following a Poisson distribution.
This is useful e.g. for (binned) photon count data.
Parameters:
x : iterable
x-coordinate of the data
y : iterable
y-coordinate of the data
model : an astropy.modeling.FittableModel instance
The model to use in the likelihood.
Attributes:
x : iterable
x-coordinate of the data
y : iterable
y-coordinate of the data
yerr : iterable
the uncertainty on the data, as standard deviation
Evaluate the log-likelihood for a given set of parameters.
Parameters:
pars : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-likelihood. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-likelihood, i.e.
-loglike, rather than loglike. This is useful e.g.
for optimization routines, which generally minimize
functions.
A likelihood based on the Chi-square distribution, appropriate for modelling
(averaged) power spectra. Note that this is not the same as the statistic
astronomers commonly call Chi-Square, which is a fit statistic derived from
the Gaussian log-likelihood, defined elsewhere in this module.
Parameters:
freq : iterable
Array with frequencies
power : iterable
Array with (averaged/singular) powers corresponding to the
frequencies in freq
model : an astropy.modeling.FittableModel instance
The model to use in the likelihood.
m : int
1/2 of the degrees of freedom
Attributes:
x : iterable
x-coordinate of the data
y : iterable
y-coordinate of the data
yerr : iterable
the uncertainty on the data, as standard deviation
Evaluate the log-likelihood for a given set of parameters.
Parameters:
pars : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-likelihood. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-likelihood, i.e.
-loglike, rather than loglike. This is useful e.g.
for optimization routines, which generally minimize
functions.
Returns:
loglike : float
The log(likelihood) value for the data and model.
class stingray.modeling.LaplaceLogLikelihood(x, y, yerr, model)
A Laplace likelihood for the cospectrum.
Parameters:
x : iterable
Array with independent variable
y : iterable
Array with dependent variable
model : an astropy.modeling.FittableModel instance
The model to use in the likelihood.
yerr : iterable
Array with the uncertainties on y, in standard deviation
Attributes:
x : iterable
x-coordinate of the data
y : iterable
y-coordinate of the data
yerr : iterable
the uncertainty on the data, as standard deviation
Evaluate the log-likelihood for a given set of parameters.
Parameters:
pars : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-likelihood. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-likelihood, i.e.
-loglike, rather than loglike. This is useful e.g.
for optimization routines, which generally minimize
functions.
These classes define basic posteriors for parametric modelling of time series and power spectra, based on
the log-likelihood classes defined in Log-Likelihood Classes. stingray.modeling.Posterior is an
abstract base class laying out a basic template for defining posteriors. As with the log-likelihood classes
above, several posterior classes are defined for a variety of data types.
Note that priors are not pre-defined in these classes, since they are problem dependent and should be
set by the user. The convenience function stingray.modeling.set_logprior() can be useful to help set
priors for these posterior classes.
class stingray.modeling.Posterior(x, y, model, **kwargs)
The Posterior describes the Bayesian probability distribution of
a set of parameters \(\theta\) given some observed data \(D\) and
some prior assumptions \(I\). It is computed as
\[p(\theta | D, I) = \frac{p(D | \theta, I)\,p(\theta | I)}{p(D | I)},\]
where \(p(D | \theta, I)\) describes the likelihood, i.e. the
sampling distribution of the data and the (parametric) model, and
\(p(\theta | I)\) describes the prior distribution, i.e. our information
about the parameters \(\theta\) before we gathered the data.
The marginal likelihood \(p(D| I)\) describes the probability of
observing the data given the model assumptions, integrated over the
space of all parameters.
Parameters:
x : iterable
The abscissa or independent variable of the data. This could
in principle be a multi-dimensional array.
y : iterable
The ordinate or dependent variable of the data.
model : astropy.modeling.models instance
The parametric model supposed to represent the data. For details
see the astropy.modeling documentation
kwargs
keyword arguments related to the subclasses of Posterior. For
details, see the documentation of the individual subclasses
References
Sivia, D. S., and J. Skilling. "Data Analysis: A Bayesian Tutorial." 2006.
Gelman, Andrew, et al. Bayesian data analysis. Vol. 2. Boca Raton, FL, USA: Chapman & Hall/CRC, 2014.
von Toussaint, Udo. “Bayesian inference in physics.” Reviews of Modern Physics 83.3 (2011): 943.
Hogg, David W. “Probability Calculus for inference”. arxiv: 1205.4446
Definition of the log-posterior.
Requires methods loglikelihood and logprior to both
be defined.
Note that loglikelihood is set in the subclass of Posterior
appropriate for your problem at hand, as is logprior.
Parameters:
t0 : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-posterior. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-posterior, i.e.
-lpost, rather than lpost. This is useful e.g.
for optimization routines, which generally minimize
functions.
Returns:
lpost : float
The value of the log-posterior for the given parameters t0
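Up to the constant marginal likelihood, the combination being evaluated is just the sum of log-likelihood and log-prior. A minimal sketch, independent of stingray's class machinery:

```python
import numpy as np

def logposterior_sketch(t0, loglikelihood, logprior, neg=False):
    # Unnormalized log-posterior: log-likelihood plus log-prior
    lpost = loglikelihood(t0) + logprior(t0)
    if not np.isfinite(lpost):
        lpost = -np.inf  # e.g. parameters outside the prior support
    return -lpost if neg else lpost
```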
class stingray.modeling.GaussianPosterior(x, y, yerr, model, priors=None)
A general class for two-dimensional data following a Gaussian
sampling distribution.
Parameters:
x : numpy.ndarray
independent variable
y : numpy.ndarray
dependent variable
yerr : numpy.ndarray
measurement uncertainties for y
model : instance of any subclass of astropy.modeling.FittableModel
The model for the power spectrum.
priors : dict of form {"parametername": function}, optional
A dictionary with the definitions for the prior probabilities.
For each parameter in model, there must be a prior defined with
a key of the exact same name as stored in model.param_names.
The item for each key is a function definition defining the prior
(e.g. a lambda function or a scipy.stats.distribution.pdf).
If priors=None, then no prior is set. This means priors need
to be added by hand using the set_logprior() function defined in
this module. Note that it is impossible to call a Posterior object
itself or the self.logposterior method without defining a prior.
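A priors dictionary of the form described above might look like the following; the parameter names ("amplitude", "x_0") are examples standing in for whatever model.param_names actually holds:

```python
from scipy import stats

# Each value is a callable returning the prior probability density
priors = {
    "amplitude": lambda amp: stats.norm(loc=1.0, scale=0.5).pdf(amp),
    "x_0": lambda x0: 1.0 / 10.0 if 0 < x0 < 10 else 0.0,  # uniform on (0, 10)
}
```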
Definition of the log-posterior.
Requires methods loglikelihood and logprior to both
be defined.
Note that loglikelihood is set in the subclass of Posterior
appropriate for your problem at hand, as is logprior.
Parameters:
t0 : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-posterior. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-posterior, i.e.
-lpost, rather than lpost. This is useful e.g.
for optimization routines, which generally minimize
functions.
Returns:
lpost : float
The value of the log-posterior for the given parameters t0
class stingray.modeling.PoissonPosterior(x, y, model, priors=None)
Posterior for Poisson light curve data. Primary intended use is for
modelling X-ray light curves, but alternative uses are conceivable.
Parameters:
x : numpy.ndarray
The independent variable (e.g. time stamps of a light curve)
y : numpy.ndarray
The dependent variable (e.g. counts per bin of a light curve)
model : instance of any subclass of astropy.modeling.FittableModel
The model for the power spectrum.
priors : dict of form {"parametername": function}, optional
A dictionary with the definitions for the prior probabilities.
For each parameter in model, there must be a prior defined with
a key of the exact same name as stored in model.param_names.
The item for each key is a function definition defining the prior
(e.g. a lambda function or a scipy.stats.distribution.pdf).
If priors=None, then no prior is set. This means priors need
to be added by hand using the set_logprior() function defined in
this module. Note that it is impossible to call a Posterior object
itself or the self.logposterior method without defining a prior.
Attributes:
x : numpy.ndarray
The independent variable (e.g. time stamps of a light curve)
y : numpy.ndarray
The dependent variable (e.g. counts per bin of a light curve)
model : instance of any subclass of astropy.modeling.FittableModel
Definition of the log-posterior.
Requires methods loglikelihood and logprior to both
be defined.
Note that loglikelihood is set in the subclass of Posterior
appropriate for your problem at hand, as is logprior.
Parameters:
t0 : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-posterior. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-posterior, i.e.
-lpost, rather than lpost. This is useful e.g.
for optimization routines, which generally minimize
functions.
Returns:
lpost : float
The value of the log-posterior for the given parameters t0
Posterior distribution for power spectra.
Uses an exponential distribution for the errors in the likelihood,
or a \(\chi^2\) distribution with \(2M\) degrees of freedom, where
\(M\) is the number of frequency bins or power spectra averaged in each bin.
model : instance of any subclass of astropy.modeling.FittableModel
The model for the power spectrum.
priors : dict of form {"parametername": function}, optional
A dictionary with the definitions for the prior probabilities.
For each parameter in model, there must be a prior defined with
a key of the exact same name as stored in model.param_names.
The item for each key is a function definition defining the prior
(e.g. a lambda function or a scipy.stats.distribution.pdf).
If priors=None, then no prior is set. This means priors need
to be added by hand using the set_logprior() function defined in
this module. Note that it is impossible to call a Posterior object
itself or the self.logposterior method without defining a prior.
m : int, default 1
The number of averaged periodograms or frequency bins in ps.
Useful for binned/averaged periodograms, since the value of
m will change the likelihood function!
Definition of the log-posterior.
Requires methods loglikelihood and logprior to both
be defined.
Note that loglikelihood is set in the subclass of Posterior
appropriate for your problem at hand, as is logprior.
Parameters:
t0 : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-posterior. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-posterior, i.e.
-lpost, rather than lpost. This is useful e.g.
for optimization routines, which generally minimize
functions.
Returns:
lpost : float
The value of the log-posterior for the given parameters t0
class stingray.modeling.LaplacePosterior(x, y, yerr, model, priors=None)
A general class for two-dimensional data following a Laplace
sampling distribution.
Parameters:
x : numpy.ndarray
independent variable
y : numpy.ndarray
dependent variable
yerr : numpy.ndarray
measurement uncertainties for y, in standard deviation
model : instance of any subclass of astropy.modeling.FittableModel
The model for the power spectrum.
priors : dict of form {"parametername": function}, optional
A dictionary with the definitions for the prior probabilities.
For each parameter in model, there must be a prior defined with
a key of the exact same name as stored in model.param_names.
The item for each key is a function definition defining the prior
(e.g. a lambda function or a scipy.stats.distribution.pdf).
If priors=None, then no prior is set. This means priors need
to be added by hand using the set_logprior() function defined in
this module. Note that it is impossible to call a Posterior object
itself or the self.logposterior method without defining a prior.
Definition of the log-posterior.
Requires methods loglikelihood and logprior to both
be defined.
Note that loglikelihood is set in the subclass of Posterior
appropriate for your problem at hand, as is logprior.
Parameters:
t0 : numpy.ndarray
An array of parameters at which to evaluate the model
and subsequently the log-posterior. Note that the
length of this array must match the free parameters in
model, i.e. npar
neg : bool, optional, default False
If True, return the negative log-posterior, i.e.
-lpost, rather than lpost. This is useful e.g.
for optimization routines, which generally minimize
functions.
Returns:
lpost : float
The value of the log-posterior for the given parameters t0
These classes implement functionality related to parameter estimation. They define basic fit and
sample methods using scipy.optimize and emcee, respectively, for optimization and Markov Chain Monte
Carlo sampling. stingray.modeling.PSDParEst implements some more advanced functionality for modelling
power spectra, including both frequentist and Bayesian searches for (quasi-)periodic signals.
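The "fit" step described above amounts to minimizing the negative log-posterior (or negative log-likelihood) with scipy.optimize.minimize. A minimal sketch, where the quadratic neg_lpost is a stand-in for a real Posterior object evaluated with neg=True:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the negative log-posterior; its minimum plays the role
# of the maximum-a-posteriori parameter estimate
def neg_lpost(t0):
    return 0.5 * np.sum((np.asarray(t0) - np.array([1.0, 2.0])) ** 2)

res = minimize(neg_lpost, x0=[0.0, 0.0], method="L-BFGS-B")
# res.x now holds the optimum, here close to [1, 2]
```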
Parameter estimation of two-dimensional data, either via
optimization or MCMC.
Note: optimization with bounds is not supported. If something like
this is required, define (uniform) priors in the ParametricModel
instances to be used below.
Parameters:
fitmethodstring, optional, default L-BFGS-B
Any of the strings allowed in scipy.optimize.minimize in
the method keyword. Sets the fit method to be used.
max_postbool, optional, default True
If True, then compute the Maximum-A-Posteriori estimate. If False,
compute a Maximum Likelihood estimate.
Calibrate the outcome of a Likelihood Ratio Test via MCMC.
In order to compare models via likelihood ratio test, one generally
aims to compute a p-value for the null hypothesis (generally the
simpler model). There are two special cases where the theoretical
distribution used to compute that p-value analytically given the
observed likelihood ratio (a chi-square distribution) is not
applicable:
the models are not nested (i.e. Model 1 is not a special, simpler
case of Model 2),
the parameter values fixed in Model 2 to retrieve Model 1 are at the
edges of parameter space (e.g. if one must set, say, an amplitude to
zero in order to remove a component in the more complex model, and
negative amplitudes are excluded a priori)
In these cases, the observed likelihood ratio must be calibrated via
simulations of the simpler model (Model 1), using MCMC to take into
account the uncertainty in the parameters. This function does
exactly that: it computes the likelihood ratio for the observed data,
and produces simulations to calibrate the likelihood ratio and
compute a p-value for observing the data under the assumption that
Model 1 is true.
If max_post=True, the code will use MCMC to sample the posterior
of the parameters and simulate fake data from there.
If max_post=False, the code will use the covariance matrix derived
from the fit to simulate data sets for comparison.
lpost : instance of class Posterior or one of its subclasses
that defines the function to be minimized (either in loglikelihood
or logposterior)
t0 : {list | numpy.ndarray}
List/array with set of initial parameters
neg : bool, optional, default True
Boolean to be passed to lpost, setting whether to use the
negative posterior or the negative log-likelihood. Useful for
optimization routines, which are generally defined as minimization routines.
lpost : instance of class Posterior or one of its subclasses
that defines the function to be minimized (either in loglikelihood
or logposterior)
t0 : iterable
list or array containing the starting parameters. Its length
must match lpost.model.npar.
nwalkers : int, optional, default 500
The number of walkers (chains) to use during the MCMC procedure.
The more walkers are used, the slower the estimation will be, but
the better the final distribution is likely to be.
niter : int, optional, default 100
The number of iterations to run the MCMC chains for. The larger this
number, the longer the estimation will take, but the higher the
chance that the walkers have actually converged on the true
posterior distribution.
burnin : int, optional, default 100
The number of iterations to run the walkers before convergence is
assumed to have occurred. This part of the chain will be discarded
before sampling from what is then assumed to be the posterior
distribution desired.
threads : int, optional, default 1 (DEPRECATED)
The number of threads for parallelization.
Default is 1, i.e. no parallelization.
With the change to the new emcee version 3, threads is
deprecated. Use the pool keyword argument instead.
This will no longer have any effect.
print_results : bool, optional, default True
Boolean flag setting whether the results of the MCMC run should
be printed to standard output. Default: True
plot : bool, optional, default False
Boolean flag setting whether summary plots of the MCMC chains
should be produced. Default: False
namestr : str, optional, default ``test``
Optional string for output file names for the plotting.
pool : bool, default False
If True, use pooling to parallelize the operation.
Parameter estimation for parametric modelling of power spectra.
This class contains functionality that allows parameter estimation
and related tasks that involve fitting a parametric model to an
(averaged) power spectrum.
Calibrate the highest outlier in a data set using MCMC-simulated
power spectra.
In short, the procedure does a MAP fit to the data, computes the
statistic
\[T_R = \max{\left(2\,\mathrm{data}/\mathrm{model}\right)}\]
and then does an MCMC run using the data and the model, or generates parameter samples
from the likelihood distribution using the derived covariance in a Maximum Likelihood
fit.
From the (posterior) samples, it generates fake power spectra. Each fake spectrum is fit
in the same way as the data, and the highest data/model outlier extracted as for the data.
The observed value of \(T_R\) can then be directly compared to the simulated
distribution of \(T_R\) values in order to derive a p-value of the null
hypothesis that the observed \(T_R\) is compatible with being generated by
noise.
If a sampler has already been run, the SamplingResults instance can be
fed into this method here, otherwise this method will run a sampler
automatically
max_post: bool, optional, default ``False``
If True, do MAP fits on the power spectrum to find the highest data/model outlier
Otherwise, do a Maximum Likelihood fit. If True, the simulated power spectra will
be generated from an MCMC run, otherwise the method will employ the approximated
covariance matrix for the parameters derived from the likelihood surface to generate
samples from that likelihood function.
nsim : int, optional, default 1000
Number of fake power spectra to simulate from the posterior sample. Note that this
number sets the resolution of the resulting p-value. For nsim=1000, the highest
resolution that can be achieved is \(10^{-3}\).
niter : int, optional, default 200
If sample is None, this variable will be used to set the number of steps in the
MCMC procedure after burn-in.
nwalkers : int, optional, default 500
If sample is None, this variable will be used to set the number of MCMC chains
run in parallel in the sampler.
burnin : int, optional, default 200
If sample is None, this variable will be used to set the number of burn-in steps
to be discarded in the initial phase of the MCMC run
namestr : str, optional, default ``test``
A string to be used for storing MCMC output and plots to disk
seed : int, optional, default None
An optional number to seed the random number generator with, for reproducibility of
the results obtained with this method.
Returns:
pval : float
The p-value that the highest data/model outlier is produced by random noise, calibrated
using simulated power spectra from an MCMC run.
References
For more details on the procedure employed here, see
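The highest-outlier statistic and its calibration can be sketched with plain numpy. This is an illustrative toy (chi-squared-distributed powers around a flat white-noise model), not the stingray implementation, which refits the model to each simulated spectrum:

```python
import numpy as np

# Sketch of T_R = max(2 * data / model) and its calibration against
# simulated periodograms drawn from the (here: fixed, flat) model.
rng = np.random.default_rng(3)
nfreq = 500
model = np.full(nfreq, 2.0)             # flat white-noise model level

def t_r(powers, model):
    return np.max(2.0 * powers / model)

# powers of a pure-noise periodogram follow model/2 * chi^2 with 2 dof
observed = model / 2.0 * rng.chisquare(2, nfreq)
t_obs = t_r(observed, model)

# simulate periodograms from the model to calibrate the statistic
t_sim = np.array([t_r(model / 2.0 * rng.chisquare(2, nfreq), model)
                  for _ in range(1000)])
pval = np.mean(t_sim >= t_obs)          # resolution set by the number of sims
```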
Calibrate the outcome of a Likelihood Ratio Test via MCMC.
In order to compare models via likelihood ratio test, one generally
aims to compute a p-value for the null hypothesis (generally the
simpler model). There are two special cases where the theoretical
distribution used to compute that p-value analytically given the
observed likelihood ratio (a chi-square distribution) is not
applicable:
the models are not nested (i.e. Model 1 is not a special, simpler
case of Model 2),
the parameter values fixed in Model 2 to retrieve Model 1 are at the
edges of parameter space (e.g. if one must set, say, an amplitude to
zero in order to remove a component in the more complex model, and
negative amplitudes are excluded a priori)
In these cases, the observed likelihood ratio must be calibrated via
simulations of the simpler model (Model 1), using MCMC to take into
account the uncertainty in the parameters. This function does
exactly that: it computes the likelihood ratio for the observed data,
and produces simulations to calibrate the likelihood ratio and
compute a p-value for observing the data under the assumption that
Model 1 is true.
If max_post=True, the code will use MCMC to sample the posterior
of the parameters and simulate fake data from there.
If max_post=False, the code will use the covariance matrix derived
from the fit to simulate data sets for comparison.
lpost : instance of class Posterior or one of its subclasses
that defines the function to be minimized (either in loglikelihood
or logposterior)
t0 : iterable
list or array containing the starting parameters. Its length
must match lpost.model.npar.
nwalkers : int, optional, default 500
The number of walkers (chains) to use during the MCMC procedure.
The more walkers are used, the slower the estimation will be, but
the better the final distribution is likely to be.
niter : int, optional, default 100
The number of iterations to run the MCMC chains for. The larger this
number, the longer the estimation will take, but the higher the
chance that the walkers have actually converged on the true
posterior distribution.
burnin : int, optional, default 100
The number of iterations to run the walkers before convergence is
assumed to have occurred. This part of the chain will be discarded
before sampling from what is then assumed to be the posterior
distribution desired.
threads : int, optional, default 1
The number of threads for parallelization.
Default is 1, i.e. no parallelization
print_results : bool, optional, default True
Boolean flag setting whether the results of the MCMC run should
be printed to standard output
plot : bool, optional, default False
Boolean flag setting whether summary plots of the MCMC chains
should be produced
namestr : str, optional, default ``test``
Optional string for output file names for the plotting.
A list of parameter values derived either from an approximation of the
likelihood surface, or from an MCMC run. Has dimensions (n,ndim), where
n is the number of simulated power spectra to generate, and ndim the
number of model parameters.
lpost : instance of class Posterior or one of its subclasses
that defines the function to be minimized (either in loglikelihood
or logposterior)
t0 : iterable
list or array containing the starting parameters. Its length
must match lpost.model.npar.
max_post: bool, optional, default ``False``
If True, do MAP fits on the power spectrum to find the highest data/model outlier
Otherwise, do a Maximum Likelihood fit. If True, the simulated power spectra will
be generated from an MCMC run, otherwise the method will employ the approximated
covariance matrix for the parameters derived from the likelihood surface to generate
samples from that likelihood function.
seed : int, optional, default None
An optional number to seed the random number generator with, for reproducibility of
the results obtained with this method.
Returns:
max_y_all : numpy.ndarray
An array of maximum outliers for each simulated power spectrum
Simulate likelihood ratios for two given models based on MCMC samples
for the simpler model (i.e. the null hypothesis).
Parameters:
s_all : numpy.ndarray of shape (nsamples, lpost1.npar)
An array with MCMC samples derived from the null hypothesis model in
lpost1. Its second dimension must match the number of free
parameters in lpost1.model.
These are helper classes instantiated by stingray.modeling.ParameterEstimation and its subclasses to
organize the results of model fitting and sampling in a more meaningful, easily accessible way.
Helper class that will contain the results of the regression.
Less fiddly than a dictionary.
Parameters:
lpost: instance of :class:`Posterior` or one of its subclasses
The object containing the function that is being optimized
in the regression
res: instance of ``scipy.optimize.OptimizeResult``
The object containing the results from an optimization run
neg : bool, optional, default True
A flag that sets whether the log-likelihood or negative log-likelihood
is being used
log : a logging.getLogger() object, default None
You can pass a pre-defined object for logging, else a new
logger will be instantiated
References
Attributes:
result : float
The result of the optimization, i.e. the function value at the
minimum that the optimizer found
p_opt : iterable
The list of parameters at the minimum found by the optimizer
model : astropy.models.Model instance
The parametric model fit to the data
cov : numpy.ndarray
The covariance matrix for the parameters, has shape (len(p_opt), len(p_opt))
err : numpy.ndarray
The standard deviation of the parameters, derived from the diagonal of cov.
Has the same shape as p_opt
mfit : numpy.ndarray
The values of the model for all x
deviance : float
The deviance, calculated as -2*log(likelihood)
aic : float
The Akaike Information Criterion, derived from the log(likelihood) and often used
in model comparison between non-nested models;
For more details, see [7]
bic : float
The Bayesian Information Criterion, derived from the log(likelihood) and often used
in model comparison between non-nested models;
For more details, see [8]
merit : float
sum of squared differences between data and model, normalized by the
model values
dof : int
The number of degrees of freedom in the problem, defined as the number of
data points - the number of parameters
sexp : int
2 * (number of parameters) * (number of data points)
ssd : float
sqrt(2*(sexp)), the expected sum of data-model residuals
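How these summary statistics relate to one another can be illustrated with a toy Gaussian fit in plain numpy. The variable names deliberately mirror the attributes above, but the data, model, and noise level are made up for the sketch:

```python
import numpy as np

# Illustrative relationships between the OptimizationResults statistics
# for a toy straight-line fit with Gaussian errors (sigma = 0.5).
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 100)
model = 2.0 * x + 1.0                        # "best-fit" model values (mfit)
data = model + rng.normal(scale=0.5, size=x.size)

npar = 2                                     # slope and intercept
loglike = np.sum(-0.5 * ((data - model) / 0.5) ** 2
                 - np.log(0.5 * np.sqrt(2 * np.pi)))

deviance = -2.0 * loglike                    # deviance = -2 log(likelihood)
aic = -2.0 * loglike + 2.0 * npar            # Akaike Information Criterion
bic = -2.0 * loglike + npar * np.log(x.size) # Bayesian Information Criterion
merit = np.sum((data - model) ** 2 / model)  # model-normalized sum of squares
dof = x.size - npar                          # degrees of freedom
sexp = 2.0 * npar * x.size                   # 2 * n_parameters * n_datapoints
ssd = np.sqrt(2.0 * sexp)                    # expected residual scatter
```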
Compute the covariance of the parameters using inverse of the Hessian, i.e.
the second-order derivative of the log-likelihood. Also calculates an estimate
of the standard deviation in the parameters, using the square root of the diagonal
of the covariance matrix.
The Hessian is either estimated directly by the chosen method of fitting, or
approximated using the statsmodels approx_hess function.
Parameters:
lpost: instance of :class:`Posterior` or one of its subclasses
The object containing the function that is being optimized
in the regression
res: instance of ``scipy``’s ``OptimizeResult`` class
The object containing the results from an optimization run
Helper class that will contain the results of the sampling
in a handy format.
Less fiddly than a dictionary.
Parameters:
sampler: ``emcee.EnsembleSampler`` object
The object containing the sampler that’s done all the work.
ci_min: float out of [0,100]
The lower bound percentile for printing credible intervals
on the parameters
ci_max: float out of [0,100]
The upper bound percentile for printing credible intervals
on the parameters
log : a logging.getLogger() object, default None
You can pass a pre-defined object for logging, else a new
logger will be instantiated
References
Attributes:
samples : numpy.ndarray
An array of samples from the MCMC run, including all chains
flattened into one long (nwalkers*niter, ndim) array
nwalkers : int
The number of chains used in the MCMC procedure
niter : int
The number of MCMC iterations in each chain
ndim : int
The dimensionality of the problem, i.e. the number of
parameters in the model
acceptance : float
The mean acceptance ratio, calculated over all chains
L : float
The product of acceptance ratio and number of samples
acor : float
The autocorrelation length for the chains; should be shorter
than the chains themselves for independent sampling
rhat : float
weighted average of between-sequence variance and within-sequence
variance; Gelman-Rubin convergence statistic [11]
mean : numpy.ndarray
An array of size ndim, with the posterior means of the parameters
derived from the MCMC chains
std : numpy.ndarray
An array of size ndim with the posterior standard deviations of
the parameters derived from the MCMC chains
ci : numpy.ndarray
An array of shape (ndim, 2) containing the lower and upper bounds
of the credible interval (the Bayesian equivalent of the confidence
interval) for each parameter using the bounds set by ci_min and ci_max
Compute common statistics for convergence of the MCMC
chains. While you can never be completely sure that your chains
converged, these present reasonable heuristics to give an
indication whether convergence is very far off or reasonably close.
Currently implemented are the autocorrelation time [12] and the
Gelman-Rubin convergence criterion [13].
Infer the Posterior means, standard deviations and credible intervals
(i.e. the Bayesian equivalent to confidence intervals) from the Posterior samples
for each parameter.
Parameters:
ci_min : float
Lower bound to the credible interval, given as percentage between
0 and 100
ci_max : float
Upper bound to the credible interval, given as percentage between
0 and 100
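The means, standard deviations, and percentile-based credible intervals can be sketched directly from a flattened sample array with numpy. This is a minimal illustration of the bookkeeping (the synthetic "posterior" samples below are made up), not the stingray code:

```python
import numpy as np

# Posterior summaries from flattened MCMC samples, as infer() computes them.
rng = np.random.default_rng(5)
nwalkers, niter, ndim = 8, 500, 2
# toy "posterior": two independent Gaussian parameters
samples = rng.normal([1.0, -2.0], [0.1, 0.3], size=(nwalkers * niter, ndim))

mean = samples.mean(axis=0)                  # posterior means, size ndim
std = samples.std(axis=0)                    # posterior standard deviations
ci_min, ci_max = 5.0, 95.0                   # percentile bounds, out of [0, 100]
ci = np.percentile(samples, [ci_min, ci_max], axis=0).T  # shape (ndim, 2)
```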
These functions are designed to help the user perform common tasks related to modelling and parameter
estimation. In particular, the function stingray.modeling.set_logprior() is designed to
help users set priors in their stingray.modeling.Posterior subclass objects.
This function constructs the logprior method required to successfully
use a Posterior object.
All instances of class Posterior and its subclasses are required to implement a
logprior method. However, priors are strongly problem-dependent and
therefore usually user-defined.
This function allows for setting the logprior method on any instance
of class Posterior efficiently by allowing the user to pass a
dictionary of priors and an instance of class Posterior.
An instance of class Posterior or any of its subclasses
priors : dict
A dictionary containing the prior definitions. Keys are parameter
names as defined by the astropy.models.FittableModel instance supplied
to the model parameter in Posterior. Items are functions
that take a parameter as input and return the log-prior probability
of that parameter.
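A priors dictionary of this form might look like the following sketch. The parameter names ("amplitude", "alpha") are hypothetical, and the hand-rolled logprior function only illustrates the combination that set_logprior builds for you:

```python
import numpy as np
import scipy.stats

# One entry per model parameter; each value maps a parameter value to its
# log-prior probability (here: a Gaussian prior and a uniform prior).
priors = {
    "amplitude": lambda amp: scipy.stats.norm(loc=10.0, scale=2.0).logpdf(amp),
    "alpha": lambda a: np.log(1.0 / 4.0) if 1.0 <= a <= 5.0 else -np.inf,
}

def logprior(t0, order=("amplitude", "alpha")):
    """Sum the log-priors for a parameter vector, as set_logprior would."""
    return sum(priors[name](p) for name, p in zip(order, t0))
```

Any parameter outside its prior support drives the total log-prior to -inf, which is how a MAP fit or sampler rejects that region of parameter space.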
Fit a number of Lorentzians to a cross spectrum, possibly including white
noise. Each Lorentzian has three parameters (amplitude, centroid position,
full-width at half maximum), plus one extra parameter if the white noise
level should be fit as well. Priors for each parameter can be included in
case max_post=True, in which case the function will attempt a
Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one
entry for each parameter.
The parameter names are (amplitude_i,x_0_i,fwhm_i) for each i out of
a total of N Lorentzians. The white noise level has a parameter
amplitude_(N+1). For example, a model with two Lorentzians and a
white noise level would have parameters:
[amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].
Parameters:
cs : Crossspectrum
A Crossspectrum object with the data to be fit
model: astropy.modeling.models class instance
The parametric model supposed to represent the data. For details
see the astropy.modeling documentation
starting_pars : iterable, optional, default None
The list of starting guesses for the optimizer. If it is not provided,
then default parameters are taken from model. See explanation above
for ordering of parameters in this list.
max_post : bool, optional, default False
If True, perform a Maximum-A-Posteriori fit of the data rather than a
Maximum Likelihood fit. Note that this requires priors to be specified,
otherwise this will cause an exception!
priors : {dict | None}, optional, default None
Dictionary with priors for the MAP fit. This should be of the form
{“parameter name”: probability distribution, …}
fitmethod : string, optional, default “L-BFGS-B”
Specifies an optimization algorithm to use. Supply any valid option for
scipy.optimize.minimize.
Returns:
parest : PSDParEst object
A PSDParEst object for further analysis
res : OptimizationResults object
The OptimizationResults object storing useful results and quantities
relating to the fit
Fit a number of Lorentzians to a power spectrum, possibly including white
noise. Each Lorentzian has three parameters (amplitude, centroid position,
full-width at half maximum), plus one extra parameter if the white noise
level should be fit as well. Priors for each parameter can be included in
case max_post=True, in which case the function will attempt a
Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one
entry for each parameter.
The parameter names are (amplitude_i,x_0_i,fwhm_i) for each i out of
a total of N Lorentzians. The white noise level has a parameter
amplitude_(N+1). For example, a model with two Lorentzians and a
white noise level would have parameters:
[amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].
Parameters:
ps : Powerspectrum
A Powerspectrum object with the data to be fit
nlor : int
The number of Lorentzians to fit
starting_pars : iterable
The list of starting guesses for the optimizer. If it is not provided,
then default parameters are taken from model. See explanation above
for ordering of parameters in this list.
fit_whitenoise : bool, optional, default True
If True, the code will attempt to fit a white noise level along with
the Lorentzians. Be sure to include a starting parameter for the
optimizer in starting_pars!
max_post : bool, optional, default False
If True, perform a Maximum-A-Posteriori fit of the data rather than a
Maximum Likelihood fit. Note that this requires priors to be specified,
otherwise this will cause an exception!
priors : {dict | None}, optional, default None
Dictionary with priors for the MAP fit. This should be of the form
{“parameter name”: probability distribution, …}
fitmethod : string, optional, default “L-BFGS-B”
Specifies an optimization algorithm to use. Supply any valid option for
scipy.optimize.minimize.
Returns:
parest : PSDParEst object
A PSDParEst object for further analysis
res : OptimizationResults object
The OptimizationResults object storing useful results and quantities
relating to the fit
Examples
We start by making an example power spectrum with three Lorentzians
>>> np.random.seed(400)
>>> nlor = 3
Now we have to guess starting parameters. For each Lorentzian, we have
amplitude, centroid position and fwhm, and this pattern repeats for each
Lorentzian in the fit. The white noise level is the last parameter.
>>> t0 = [150, 0.4, 0.2, 50, 2.3, 0.6, 20, 8.0, 0.4, 2.1]
We’re ready for doing the fit:
>>> parest, res = fit_lorentzians(ps, nlor, t0)
res contains a whole array of useful information about the fit, for
example the parameters at the optimum:
>>> p_opt = res.p_opt
Fit a number of Lorentzians to a power spectrum, possibly including white
noise. Each Lorentzian has three parameters (amplitude, centroid position,
full-width at half maximum), plus one extra parameter if the white noise
level should be fit as well. Priors for each parameter can be included in
case max_post=True, in which case the function will attempt a
Maximum-A-Posteriori fit. Priors must be specified as a dictionary with one
entry for each parameter.
The parameter names are (amplitude_i,x_0_i,fwhm_i) for each i out of
a total of N Lorentzians. The white noise level has a parameter
amplitude_(N+1). For example, a model with two Lorentzians and a
white noise level would have parameters:
[amplitude_0, x_0_0, fwhm_0, amplitude_1, x_0_1, fwhm_1, amplitude_2].
Parameters:
ps : Powerspectrum
A Powerspectrum object with the data to be fit
model: astropy.modeling.models class instance
The parametric model supposed to represent the data. For details
see the astropy.modeling documentation
starting_pars : iterable, optional, default None
The list of starting guesses for the optimizer. If it is not provided,
then default parameters are taken from model. See explanation above
for ordering of parameters in this list.
fit_whitenoise : bool, optional, default True
If True, the code will attempt to fit a white noise level along with
the Lorentzians. Be sure to include a starting parameter for the
optimizer in starting_pars!
max_post : bool, optional, default False
If True, perform a Maximum-A-Posteriori fit of the data rather than a
Maximum Likelihood fit. Note that this requires priors to be specified,
otherwise this will cause an exception!
priors : {dict | None}, optional, default None
Dictionary with priors for the MAP fit. This should be of the form
{“parameter name”: probability distribution, …}
fitmethod : string, optional, default “L-BFGS-B”
Specifies an optimization algorithm to use. Supply any valid option for
scipy.optimize.minimize.
Returns:
parest : PSDParEst object
A PSDParEst object for further analysis
res : OptimizationResults object
The OptimizationResults object storing useful results and quantities
relating to the fit
Examples
We start by making an example power spectrum with three Lorentzians
>>> m = 1
>>> nfreq = 100000
>>> freq = np.linspace(1, 1000, nfreq)
>>> np.random.seed(100)  # set the seed for the random number generator
>>> noise = np.random.exponential(size=nfreq)
Now we have to guess starting parameters. For each Lorentzian, we have
amplitude, centroid position and fwhm, and this pattern repeats for each
Lorentzian in the fit. The white noise level is the last parameter.
>>> t0 = [80, 1.5, 2.5]
Let’s also make a model to test:
>>> model_to_test = models.PowerLaw1D() + models.Const1D()
>>> model_to_test.amplitude_1.fixed = True
We’re ready for doing the fit:
>>> parest, res = fit_powerspectrum(ps, model_to_test, t0)
res contains a whole array of useful information about the fit, for
example the parameters at the optimum:
>>> p_opt = res.p_opt
Performs epoch folding at trial frequencies in photon data.
If no exposure correction is needed and numba is installed, it uses a fast
algorithm to perform the folding. Otherwise, it runs a much slower
algorithm, which however yields a more precise result.
The search can be done in segments and the results averaged. Use
segment_size to control this.
Parameters:
times : array-like
the event arrival times
frequencies : array-like
the trial values for the frequencies
Returns:
(fgrid, stats) or (fgrid, fdgrid, stats), as follows:
fgrid : array-like
frequency grid of the epoch folding periodogram
fdgrid : array-like
frequency derivative grid. Only returned if fdots is an array.
stats : array-like
the epoch folding statistics corresponding to each frequency bin.
Other Parameters:
nbin : int
the number of bins of the folded profiles
segment_size : float
the length of the segments to be averaged in the periodogram
fdots : array-like
trial values of the first frequency derivative (optional)
expocorr : bool
correct for the exposure (Use it if the period is comparable to the
length of the good time intervals). If True, GTIs have to be specified
via the gti keyword
gti : [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good time intervals
weights : array-like
weight for each time. This might be, for example, the number of counts
if the times array contains the time bins of a light curve
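The epoch folding statistic at a single trial frequency can be sketched in a few lines of numpy: fold the arrival times, histogram the phases, and compare the profile against a flat profile with a chi-squared-like sum. This is an illustrative toy (uniform, signal-free arrival times), not the optimized stingray implementation:

```python
import numpy as np

# Epoch folding statistic at one trial frequency, then a grid search,
# mirroring the shape of the epoch_folding_search output.
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 1000, 5000))   # toy photon arrival times

def ef_stat(times, f, nbin=16):
    phases = (times * f) % 1.0                # fold at the trial frequency
    profile, _ = np.histogram(phases, bins=nbin, range=(0, 1))
    expected = times.size / nbin              # flat profile under the null
    return np.sum((profile - expected) ** 2 / expected)

fgrid = np.linspace(1.4, 1.6, 201)            # trial frequency grid
stats = np.array([ef_stat(times, f) for f in fgrid])
```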
By default, the keyword times accepts a list of
unbinned photon arrival times. If the input data is
a (binned) light curve, then times will contain the
time stamps of the observation, and weights should
be set to the corresponding fluxes or counts.
Parameters:
times : array of floats
Photon arrival times, or, if weights is set,
time stamps of a light curve.
f, fdot, fddot… : float
The frequency and any number of derivatives.
Returns:
phase_bins : array of floats
The phases corresponding to the pulse profile
profile : array of floats
The pulse profile
profile_err : array of floats
The uncertainties on the pulse profile
Other Parameters:
nbin : int, optional, default 16
The number of bins in the pulse profile
weights : float or array of floats, optional
The weights of the data. It can either be specified as a single value
for all points, or an array with the same length as time
expocorr : bool, optional
Correct each bin for exposure (use when the period of the pulsar is
comparable to that of GTIs)
mode : str, [“ef”, “pdm”], default “ef”
Whether to calculate the epoch folding or phase dispersion
minimization folded profile. For “ef”, it calculates the (weighted)
sum of the data points in each phase bin, for “pdm”, the variance
in each phase bin
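The core of the "ef" mode is a phase histogram with Poisson errors, which can be sketched as follows (illustrative toy data and trial frequency; the stingray function additionally handles weights, frequency derivatives, and exposure correction):

```python
import numpy as np

# Minimal folded-profile computation for unbinned event times.
rng = np.random.default_rng(2)
times = np.sort(rng.uniform(0, 200, 2000))    # toy arrival times
f = 0.7                                       # trial pulse frequency, Hz

nbin = 16
phases = (times * f) % 1.0                    # pulse phase of each event
profile, edges = np.histogram(phases, bins=nbin, range=(0, 1))
phase_bins = 0.5 * (edges[1:] + edges[:-1])   # bin centers
profile_err = np.sqrt(profile)                # Poisson uncertainty per bin
```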
H-test statistic, à la De Jager+89, A&A, 221, 180D, eq. 2.
If datatype is “binned” or “gauss”, uses the formulation from
Bachetti+2021, ApJ, arxiv:2012.11397
Parameters:
data : array of floats
Phase values or binned flux values
nmax : int, default 20
Maximum of harmonics for Z^2_n
Returns:
M : int
The best number of harmonics that describe the signal.
htest : float
The H-test statistic of the events.
Other Parameters:
datatype : str
The datatype of data: “events” if phase values between 0 and 1,
“binned” if folded pulse profile from photons, “gauss” if
folded pulse profile with normally-distributed fluxes
err : float
The uncertainty on the pulse profile fluxes (required for
datatype=”gauss”, ignored otherwise)
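For the unbinned-events case, the H-test can be sketched directly from its definition: scan the Z^2_m statistics over harmonics and keep the best penalized value, H = max(Z^2_m - 4m + 4). The toy phases below are made up, and this is an illustration rather than the stingray implementation:

```python
import numpy as np

def z_n_stat(phases, n):
    """Z^2_n statistic for event phases (sum of the first n Rayleigh powers)."""
    total = 0.0
    for k in range(1, n + 1):
        total += np.sum(np.cos(2 * np.pi * k * phases)) ** 2
        total += np.sum(np.sin(2 * np.pi * k * phases)) ** 2
    return 2.0 / phases.size * total

def h_test(phases, nmax=20):
    """Return (M, H): best number of harmonics and the H statistic."""
    hs = [z_n_stat(phases, m) - 4.0 * m + 4.0 for m in range(1, nmax + 1)]
    M = int(np.argmax(hs)) + 1
    return M, max(hs)

rng = np.random.default_rng(4)
phases = rng.normal(0.5, 0.1, 2000) % 1.0     # strongly peaked pulse profile
M, h = h_test(phases)
```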
Performs folding at trial frequencies in time series data (i.e. a light curve
of flux or photon counts) and computes the Phase Dispersion Minimization statistic.
If no exposure correction is needed and numba is installed, it uses a fast
algorithm to perform the folding. Otherwise, it runs a much slower
algorithm, which however yields a more precise result.
The search can be done in segments and the results averaged. Use
segment_size to control this.
Parameters:
times : array-like
the time stamps of the time series
flux : array-like
the flux or photon count values of the time series
frequencies : array-like
the trial values for the frequencies
Returns:
(fgrid, stats) or (fgrid, fdgrid, stats), as follows:
fgrid : array-like
frequency grid of the epoch folding periodogram
fdgrid : array-like
frequency derivative grid. Only returned if fdots is an array.
stats : array-like
the epoch folding statistics corresponding to each frequency bin.
Other Parameters:
nbin : int
the number of bins of the folded profiles
segment_size : float
the length of the segments to be averaged in the periodogram
fdots : array-like
trial values of the first frequency derivative (optional)
expocorr : bool
correct for the exposure (Use it if the period is comparable to the
length of the good time intervals). If True, GTIs have to be specified
via the gti keyword
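The phase dispersion minimization statistic at one trial frequency can be sketched as the ratio of the pooled within-phase-bin variance to the total variance of the flux; minima of this ratio mark candidate pulse frequencies. The toy sinusoidal light curve below is made up, and this is an illustration, not the stingray implementation:

```python
import numpy as np

def pdm_stat(times, flux, f, nbin=16):
    """Within-bin variance / total variance of the flux, folded at f."""
    phases = (times * f) % 1.0
    bins = np.minimum((phases * nbin).astype(int), nbin - 1)
    pooled, ndof = 0.0, 0
    for b in range(nbin):
        vals = flux[bins == b]
        if vals.size > 1:
            pooled += (vals.size - 1) * vals.var(ddof=1)
            ndof += vals.size - 1
    return (pooled / ndof) / flux.var(ddof=1)

rng = np.random.default_rng(6)
times = np.linspace(0, 100, 2000)
flux = np.sin(2 * np.pi * 0.5 * times) + rng.normal(scale=0.1, size=times.size)
# pdm_stat(times, flux, 0.5) is much smaller than at an off-pulse frequency
```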
Calculate and plot the phaseogram of a pulsar observation.
The phaseogram is a 2-D histogram where the x axis is the pulse phase and
the y axis is the time. It shows how the pulse phase changes with time, and
it is very useful to see if the pulse solution is correct and/or if there
are additional frequency derivatives appearing in the data (due to spin up
or down, or even orbital motion)
Parameters:
times : array
Event arrival times
f : float
Pulse frequency
Returns:
phaseogr : 2-D matrix
The phaseogram
phases : array-like
The x axis of the phaseogram (the x bins of the histogram),
corresponding to the pulse phase in each column
times : array-like
The y axis of the phaseogram (the y bins of the histogram),
corresponding to the time at each row
additional_info : dict
Additional information, like the pulse profile and the axes to modify
the plot (the latter, only if return_plot is True)
Other Parameters:
nph : int
Number of phase bins
nt : int
Number of time bins
ph0 : float
The starting phase of the pulse
mjdref : float
MJD reference time. If given, the y axis of the plot will be in MJDs,
otherwise it will be in seconds.
fdot : float
First frequency derivative
fddot : float
Second frequency derivative
pepoch : float
If the input pulse solution is referred to a given time, give it here.
It has no effect (just a phase shift of the pulse) if fdot is zero.
If mjdref is specified, pepoch MUST be in MJD
weights : array
Weight for each time
plot : bool
Return the axes in the additional_info, and don’t close the plot, so
that the user can add information to it.
Search peaks above threshold in an epoch folding periodogram.
If more values of stat are above threshold and are contiguous, only the
largest one is returned (see Examples).
Parameters:
x : array-like
The x axis of the periodogram (frequencies, periods, …)
stat : array-like
The y axis. It must have the same shape as x
threshold : float
The threshold value over which we look for peaks in the stat array
Returns:
best_x : array-like
the array containing the x position of the peaks above threshold. If no
peaks are above threshold, an empty list is returned. The array is
sorted by inverse value of stat
best_stat : array-like
for each best_x, give the corresponding stat value. Empty if no peaks
above threshold.
Examples
>>> # Test multiple peaks
>>> x = np.arange(10)
>>> stat = [0, 0, 0.5, 0, 0, 1, 1, 2, 1, 0]
>>> best_x, best_stat = search_best_peaks(x, stat, 0.5)
>>> len(best_x)
2
>>> assert np.isclose(best_x[0], 7.0)
>>> assert np.isclose(best_x[1], 2.0)
>>> stat = [0, 0, 2.5, 0, 0, 1, 1, 2, 1, 0]
>>> best_x, best_stat = search_best_peaks(x, stat, 0.5)
>>> assert np.isclose(best_x[0], 2.0)
>>> # Test no peak above threshold
>>> x = np.arange(10)
>>> stat = [0, 0, 0.4, 0, 0, 0, 0, 0, 0, 0]
>>> best_x, best_stat = search_best_peaks(x, stat, 0.5)
>>> best_x
[]
>>> best_stat
[]
This method builds arguments for and then calls pytest.main.
Parameters:
package : str, optional
The name of a specific package to test, e.g. ‘io.fits’ or
‘utils’. Accepts a comma-separated string to specify multiple
packages. If nothing is specified, all default tests are run.
args : str, optional
Additional arguments to be passed to pytest.main in the args
keyword argument.
docs_path : str, optional
The path to the documentation .rst files.
parallel : int or ‘auto’, optional
When provided, run the tests in parallel on the specified
number of CPUs. If parallel is ‘auto’, it will use all the
cores on the machine. Requires the pytest-xdist plugin.
pastebin : (‘failed’, ‘all’, None), optional
Convenience option for turning on pytest pastebin output. Set to
‘failed’ to upload info for failed tests, or ‘all’ to upload info
for all tests.
pdb : bool, optional
Turn on PDB post-mortem analysis for failing tests. Same as
specifying --pdb in args.
pep8 : bool, optional
Turn on PEP8 checking via the pytest-pep8 plugin and disable normal
tests. Same as specifying --pep8 -k pep8 in args.
plugins : list, optional
Plugins to be passed to pytest.main in the plugins keyword
argument.
remote_data : {‘none’, ‘astropy’, ‘any’}, optional
Controls whether to run tests marked with @pytest.mark.remote_data. This can be
set to run no tests with remote data (none), only ones that use
data from http://data.astropy.org (astropy), or all tests that
use remote data (any). The default is none.
skip_docs : bool, optional
When True, skips running the doctests in the .rst files.
test_path : str, optional
Specify location to test by path. May be a single file or
directory. Must be specified absolutely or relative to the
calling directory.
verbose : bool, optional
Convenience option to turn on verbose output from pytest. Passing
True is the same as specifying -v in args.
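Since this method only builds an argument list for pytest.main, the mapping from keyword options to command-line flags can be illustrated with a small sketch. This is not the actual implementation; the function name build_pytest_args and the subset of options shown are hypothetical, chosen to mirror the parameters documented above.

```python
# Hedged sketch: how options like parallel, pastebin, verbose, and pdb
# could be translated into the argument list handed to pytest.main.
# (Illustrative only; not the library's actual code.)

def build_pytest_args(args="", parallel=None, pastebin=None,
                      verbose=False, pdb=False):
    """Build a pytest argument list from test-runner options."""
    all_args = args.split() if args else []
    if parallel is not None:
        # -n comes from the pytest-xdist plugin
        all_args += ["-n", str(parallel)]
    if pastebin in ("failed", "all"):
        all_args.append(f"--pastebin={pastebin}")
    if verbose:
        all_args.append("-v")
    if pdb:
        all_args.append("--pdb")
    return all_args

print(build_pytest_args(args="-k lightcurve", parallel="auto", verbose=True))
```

The resulting list would then be passed as `pytest.main(all_args, plugins=plugins)`.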
stingray.pulse.z_n(data, n, datatype='events', err=None, norm=None)[source]¶
Z^2_n statistics, à la Buccheri+83, A&A, 128, 245, eq. 2.
If datatype is “binned” or “gauss”, uses the formulation from
Bachetti+2021, ApJ, arxiv:2012.11397
Parameters:
data : array of floats
Phase values or binned flux values
n : int, default 2
Number of harmonics, including the fundamental
Returns:
z2_n : float
The Z^2_n statistics of the events.
Other Parameters:
datatype : str
The data type: “events” if phase values between 0 and 1,
“binned” if folded pulse profile from photons, “gauss” if
folded pulse profile with normally-distributed fluxes
err : float
The uncertainty on the pulse profile fluxes (required for
datatype=”gauss”, ignored otherwise)
norm : float
For backwards compatibility; if norm is not None, it is
substituted for data, and data is ignored. This raises
a DeprecationWarning.
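The Z^2_n formula of Buccheri+83 (eq. 2) can be written down directly in a few lines of numpy. The sketch below implements only the “events” case from phase values; the library function additionally handles the “binned” and “gauss” data types, and z_n_sketch is a hypothetical name for illustration.

```python
# Minimal numpy sketch of the classic Z^2_n statistic for event phases:
# Z^2_n = (2/N) * sum_{k=1..n} [ (sum cos 2*pi*k*phi)^2 + (sum sin 2*pi*k*phi)^2 ]
import numpy as np

def z_n_sketch(phases, n=2):
    """Z^2_n statistic for phase values in [0, 1)."""
    phases = np.asarray(phases)
    nev = phases.size
    total = 0.0
    for k in range(1, n + 1):
        # Accumulate the squared Fourier amplitudes of harmonic k
        total += np.sum(np.cos(2 * np.pi * k * phases)) ** 2 \
               + np.sum(np.sin(2 * np.pi * k * phases)) ** 2
    return 2 / nev * total

# A perfectly peaked profile (all events at the same phase) gives 2 * N * n
print(z_n_sketch(np.zeros(100), n=2))  # → 400.0
```

Uniformly distributed phases, by contrast, give a statistic close to zero, which is what makes Z^2_n useful as a detection statistic.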
stingray.pulse.z_n_search(times, frequencies, ...)[source]¶
Calculates the Z^2_n statistics at trial frequencies in photon data.
Calculating the “real” Z^2_n statistics is very slow. Therefore, in this function the data
are folded first, and the statistics is then calculated using the values of
the folded profile as an additional normalization term.
The two methods are mostly equivalent. However, the number of bins has to
be chosen wisely: if the number of bins is too small, the search for high
harmonics is ineffective.
If no exposure correction is needed and numba is installed, a fast
folding algorithm is used. Otherwise, a much slower
algorithm runs, which however yields a more precise result.
The search can be done in segments and the results averaged. Use
segment_size to control this.
Parameters:
times : array-like
the event arrival times
frequencies : array-like
the trial values for the frequencies
Returns:
(fgrid, stats) or (fgrid, fdgrid, stats), as follows:
fgrid : array-like
frequency grid of the epoch folding periodogram
fdgrid : array-like
frequency derivative grid. Only returned if fdots is an array.
stats : array-like
the Z^2_n statistics corresponding to each frequency bin.
Other Parameters:
nbin : int
the number of bins of the folded profiles
segment_size : float
the length of the segments to be averaged in the periodogram
fdots : array-like
trial values of the first frequency derivative (optional)
expocorr : bool
correct for the exposure (use it if the period is comparable to the
length of the good time intervals)
gti : [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good time intervals
weights : array-like
weight for each time. This might be, for example, the number of counts
if the times array contains the time bins of a light curve
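The idea of scanning trial frequencies can be shown with a toy search that computes the statistic directly from event phases at each trial frequency. This skips the folding step the library uses for speed, and the names toy_z_n_search and z2_stat are hypothetical, chosen only for this sketch.

```python
# Toy Z^2_n frequency search: phase-fold the event times at each trial
# frequency and evaluate the statistic directly (slow but transparent).
import numpy as np

def z2_stat(phases, n=2):
    """Z^2_n statistic from event phases in [0, 1)."""
    total = 0.0
    for k in range(1, n + 1):
        total += np.sum(np.cos(2 * np.pi * k * phases)) ** 2 \
               + np.sum(np.sin(2 * np.pi * k * phases)) ** 2
    return 2 / phases.size * total

def toy_z_n_search(times, frequencies, n=2):
    stats = []
    for f in frequencies:
        phases = (times * f) % 1.0  # phase of each event at trial frequency f
        stats.append(z2_stat(phases, n=n))
    return np.asarray(frequencies), np.asarray(stats)

# A strictly periodic signal at 10 Hz peaks at the 10 Hz trial frequency
times = np.arange(0, 100, 0.1)
fgrid, stats = toy_z_n_search(times, [9.5, 10.0, 10.5])
print(fgrid[np.argmax(stats)])  # → 10.0
```

The library routine is much faster because it folds the data once per trial and reuses the binned profile, as described above.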
This submodule defines extensive functionality related to simulating spectral-timing data sets,
including transfer and window functions, simulating light curves from power spectra for a range
of stochastic processes.
classstingray.simulator.simulator.Simulator(dt, N, mean, rms, err=0.0, red_noise=1, random_state=None, tstart=0.0, poisson=False)[source]¶
Methods to simulate and visualize light curves.
TODO: Improve documentation
Parameters:
dt : int, default 1
time resolution of the simulated light curve
N : int, default 1024
number of bins in the simulated light curve
mean : float, default 0
mean value of the simulated light curve
rms : float, default 1
fractional rms of the simulated light curve;
the actual rms is calculated as mean*rms
err : float, default 0
the error bars on the final light curve
red_noise : int, default 1
multiple of the real light curve length over which to
simulate, to avoid red-noise leakage
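The standard recipe behind this kind of simulator is that of Timmer & Koenig (1995): draw Gaussian Fourier amplitudes with variance set by the power spectrum, inverse-transform, and rescale. The sketch below is a minimal, hedged version of that recipe in plain numpy, not the class's actual code; the function name simulate_powerlaw_lc is hypothetical, and the parameter names (dt, N, mean, rms) merely mirror the constructor above.

```python
# Hedged sketch of the Timmer & Koenig (1995) recipe for simulating a
# light curve from a power-law spectrum P(f) ~ f^-beta.
import numpy as np

def simulate_powerlaw_lc(dt, N, mean, rms, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(N, d=dt)[1:]      # skip the zero frequency
    power = freqs ** (-beta)                   # power-law spectrum
    # Independent Gaussian real/imaginary parts with variance ~ P(f)/2
    re = rng.normal(size=freqs.size) * np.sqrt(0.5 * power)
    im = rng.normal(size=freqs.size) * np.sqrt(0.5 * power)
    fourier = np.concatenate(([0.0], re + 1j * im))
    lc = np.fft.irfft(fourier, n=N)
    # Rescale to the requested mean and fractional rms
    lc = (lc - lc.mean()) / lc.std() * (rms * mean) + mean
    return lc

lc = simulate_powerlaw_lc(dt=1, N=1024, mean=100.0, rms=0.1)
print(len(lc), round(lc.mean(), 6))  # → 1024 100.0
```

The red_noise parameter of the class corresponds to simulating a light curve several times longer than N and keeping a slice, which this sketch omits for brevity.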
simulate(*args)[source]¶
Simulate light curve generation using power spectrum or
impulse response.
Parameters:
args
See examples below.
Returns:
lightcurve : LightCurve object
Examples
x = simulate(beta):
For generating a light curve using power law spectrum.
Parameters:
beta : float
Defines the shape of spectrum
x = simulate(s):
For generating a light curve from user-provided spectrum.
Note: In this case, the red_noise parameter is provided.
You can generate a longer light curve by providing a higher
frequency resolution on the input power spectrum.
Parameters:
s : array-like
power spectrum
x = simulate(model):
For generating a light curve from pre-defined model
Parameters:
model : astropy.modeling.Model
the pre-defined model
x = simulate(‘model’, params):
For generating a light curve from pre-defined model
Parameters:
model : string
the pre-defined model
params : list iterable or dict
the parameters for the pre-defined model
x = simulate(s, h):
For generating a light curve using impulse response.
Parameters:
s : array-like
Underlying variability signal
h : array-like
Impulse response
x = simulate(s, h, ‘same’):
For generating a light curve of same length as input signal,
using impulse response.
Parameters:
s : array-like
Underlying variability signal
h : array-like
Impulse response
mode : str
mode can be ‘same’, ‘filtered’, or ‘full’.
‘same’ indicates that the length of the output light
curve is the same as that of the input signal.
‘filtered’ means that the length of the output light curve
is len(s) - lag_delay.
‘full’ indicates that the length of the output light
curve is len(s) + len(h) - 1.
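The length relationships between the modes can be checked with a plain numpy convolution, used here as a stand-in for the internal filtering (the ‘filtered’ mode, which additionally trims a lag_delay, is not shown; the toy signal and impulse response are made up for the example).

```python
# Output lengths of the 'full' and 'same' convolution modes.
import numpy as np

s = np.random.default_rng(0).normal(size=200)  # underlying variability signal
h = np.exp(-np.arange(20) / 5.0)               # a toy exponential impulse response

full = np.convolve(s, h, mode="full")
same = np.convolve(s, h, mode="same")

print(len(full), len(same))  # → 219 200
```

That is, ‘full’ yields len(s) + len(h) - 1 = 219 bins, while ‘same’ trims the result back to len(s) = 200 bins.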