def bls_parallel_pfind(
times, mags, errs,
magsarefluxes=False,
startp=0.1, # by default, search from 0.1 d to...
endp=100.0, # ... 100.0 d -- don't search full timebase
stepsize=1.0e-4,
mintransitduration=0.01, # minimum transit length in phase
maxtransitduration=0.4, # maximum transit length in phase
ndurations=100,
autofreq=True, # figure out f0, nf, and df automatically
blsobjective='likelihood',
blsmethod='fast',
blsoversample=5,
blsmintransits=3,
blsfreqfactor=10.0,
nbestpeaks=5,
periodepsilon=0.1, # 0.1
sigclip=10.0,
verbose=True,
nworkers=None,
):
'''Runs the Box Least Squares Fitting Search for transit-shaped signals.
Breaks up the full frequency space into chunks and passes them to parallel
BLS workers.
Based on the version of BLS in Astropy 3.1:
`astropy.stats.BoxLeastSquares`. If you don't have Astropy 3.1, this module
will fail to import. Note that by default, this implementation of
`bls_parallel_pfind` doesn't use the `.autoperiod()` function from
`BoxLeastSquares` but uses the same auto frequency-grid generation as the
functions in `periodbase.kbls`. If you want to use Astropy's implementation,
set the `autofreq` kwarg to 'astropy'. The generated period array
will then be broken up into chunks and sent to the individual workers.
NOTE: the combined BLS spectrum produced by this function is not identical
to that produced by running BLS in one shot for the entire frequency
space. There are differences on the order of 1.0e-3 or so in the respective
peak values, but peaks appear at the same frequencies for both methods. This
is likely due to different aliasing caused by smaller chunks of the
frequency space used by the parallel workers in this function. When in
doubt, confirm results for this parallel implementation by comparing to
those from the serial implementation above.
In particular, when you want to get reliable estimates of the SNR, transit
depth, duration, etc. that Astropy's BLS gives you, rerun `bls_serial_pfind`
with `startp` and `endp` close to the best period you want to characterize
the transit at. The dict returned from that function contains a `blsmodel`
key, which is the generated model from Astropy's BLS. Use the
`.compute_stats()` method to calculate the required stats.
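For example, a minimal sketch of that follow-up (assuming `numpy` is imported
as `np` and `best_period` came from an earlier search; the `blsresult`
attributes follow Astropy's `BoxLeastSquaresResult`)::
    lspinfo = bls_serial_pfind(times, mags, errs,
                               startp=best_period - 0.1,
                               endp=best_period + 0.1)
    model, res = lspinfo['blsmodel'], lspinfo['blsresult']
    best = np.argmax(res.power)
    stats = model.compute_stats(res.period[best],
                                res.duration[best],
                                res.transit_time[best])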
Parameters
----------
times,mags,errs : np.array
The magnitude/flux time-series to search for transits.
magsarefluxes : bool
If the input measurement values in `mags` and `errs` are in fluxes, set
this to True.
startp,endp : float
The minimum and maximum periods to consider for the transit search.
stepsize : float
The step-size in frequency to use when constructing a frequency grid for
the period search.
mintransitduration,maxtransitduration : float
The minimum and maximum transit durations (in units of phase) to consider
for the transit search.
ndurations : int
The number of transit durations to use in the period-search.
autofreq : bool or str
If this is True, the value of `stepsize` will be ignored, and it, along
with the frequency grid, will be determined based on the following
relations::
stepsize = 0.25*mintransitduration/(times.max()-times.min())
minfreq = 1.0/endp
maxfreq = 1.0/startp
nfreq = int(ceil((maxfreq - minfreq)/stepsize))
If this is False, you must set `startp`, `endp`, and `stepsize` as
appropriate.
If this is str == 'astropy', will use the
`astropy.stats.BoxLeastSquares.autoperiod()` function to calculate the
frequency grid instead of the kbls method.
blsobjective : {'likelihood','snr'}
Sets the type of objective to optimize in the `BoxLeastSquares.power()`
function.
blsmethod : {'fast','slow'}
Sets the type of method to use in the `BoxLeastSquares.power()`
function.
blsoversample : int
Sets the `oversample` kwarg for the `BoxLeastSquares.power()` function.
blsmintransits : int
Sets the `minimum_n_transit` kwarg for the `BoxLeastSquares.autoperiod()`
function.
blsfreqfactor : float
Sets the `frequency_factor` kwarg for the `BoxLeastSquares.autoperiod()`
function.
periodepsilon : float
The fractional difference between successive values of 'best' periods
when sorting by periodogram power to consider them as separate periods
(as opposed to part of the same periodogram peak). This is used to avoid
broad peaks in the periodogram and make sure the 'best' periods returned
are all actually independent.
nbestpeaks : int
The number of 'best' peaks to return from the periodogram results,
starting from the global maximum of the periodogram peak values.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
verbose : bool
If this is True, will indicate progress and details about the frequency
grid used for the period search.
nworkers : int or None
The number of parallel workers to launch for period-search. If None,
nworkers = NCPUS.
Returns
-------
dict
This function returns a dict, referred to as an `lspinfo` dict in other
astrobase functions that operate on periodogram results. This is a
standardized format across all astrobase period-finders, and is of the
form below::
{'bestperiod': the best period value in the periodogram,
'bestlspval': the periodogram peak associated with the best period,
'nbestpeaks': the input value of nbestpeaks,
'nbestlspvals': nbestpeaks-size list of best period peak values,
'nbestperiods': nbestpeaks-size list of best periods,
'nbestinds': the indices of the 'best' peaks in the periodogram,
'lspvals': the full array of periodogram powers,
'frequencies': the full array of frequencies considered,
'periods': the full array of periods considered,
'durations': the array of durations used to run BLS,
'blsresult': Astropy BLS result object (BoxLeastSquaresResult),
'blsmodel': Astropy BLS BoxLeastSquares object used for work,
'stepsize': the actual stepsize used,
'nfreq': the actual nfreq used,
'mintransitduration': the input mintransitduration,
'maxtransitduration': the input maxtransitduration,
'method':'bls' -> the name of the period-finder method,
'kwargs':{ dict of all of the input kwargs for record-keeping}}
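A minimal usage sketch (the input arrays are placeholders)::
    lspinfo = bls_parallel_pfind(times, mags, errs,
                                 magsarefluxes=False,
                                 startp=1.0, endp=10.0)
    print(lspinfo['bestperiod'], lspinfo['nbestperiods'])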
'''
# get rid of nans first and sigclip
stimes, smags, serrs = sigclip_magseries(times,
mags,
errs,
magsarefluxes=magsarefluxes,
sigclip=sigclip)
# make sure there are enough points to calculate a spectrum
if len(stimes) > 9 and len(smags) > 9 and len(serrs) > 9:
# if we're setting up everything automatically
if isinstance(autofreq, bool) and autofreq:
# use heuristic to figure out best timestep
stepsize = 0.25*mintransitduration/(stimes.max()-stimes.min())
# now figure out the frequencies to use
minfreq = 1.0/endp
maxfreq = 1.0/startp
nfreq = int(npceil((maxfreq - minfreq)/stepsize))
# say what we're using
if verbose:
LOGINFO('min P: %s, max P: %s, nfreq: %s, '
'minfreq: %s, maxfreq: %s' % (startp, endp, nfreq,
minfreq, maxfreq))
LOGINFO('autofreq = True: using AUTOMATIC values for '
'freq stepsize: %s, ndurations: %s, '
'min transit duration: %s, max transit duration: %s' %
(stepsize, ndurations,
mintransitduration, maxtransitduration))
use_autoperiod = False
elif isinstance(autofreq, bool) and not autofreq:
minfreq = 1.0/endp
maxfreq = 1.0/startp
nfreq = int(npceil((maxfreq - minfreq)/stepsize))
# say what we're using
if verbose:
LOGINFO('min P: %s, max P: %s, nfreq: %s, '
'minfreq: %s, maxfreq: %s' % (startp, endp, nfreq,
minfreq, maxfreq))
LOGINFO('autofreq = False: using PROVIDED values for '
'freq stepsize: %s, ndurations: %s, '
'min transit duration: %s, max transit duration: %s' %
(stepsize, ndurations,
mintransitduration, maxtransitduration))
use_autoperiod = False
elif isinstance(autofreq, str) and autofreq == 'astropy':
use_autoperiod = True
minfreq = 1.0/endp
maxfreq = 1.0/startp
else:
LOGERROR("unknown autofreq kwarg encountered. can't continue...")
return None
# check the minimum frequency
if minfreq < (1.0/(stimes.max() - stimes.min())):
minfreq = 2.0/(stimes.max() - stimes.min())
if verbose:
LOGWARNING('the requested max P = %.3f is larger than '
'the time base of the observations = %.3f, '
' will make minfreq = 2 x 1/timebase'
% (endp, stimes.max() - stimes.min()))
LOGINFO('new minfreq: %s, maxfreq: %s' %
(minfreq, maxfreq))
#############################
## NOW RUN BLS IN PARALLEL ##
#############################
# fix number of CPUs if needed
if not nworkers or nworkers > NCPUS:
nworkers = NCPUS
if verbose:
LOGINFO('using %s workers...' % nworkers)
# check if autoperiod is True and get the correct period-grid
if use_autoperiod:
# astropy's BLS requires durations in units of time
durations = nplinspace(mintransitduration*startp,
maxtransitduration*startp,
ndurations)
# set up the correct units for the BLS model
if magsarefluxes:
blsmodel = BoxLeastSquares(
stimes*u.day,
smags*u.dimensionless_unscaled,
dy=serrs*u.dimensionless_unscaled
)
else:
blsmodel = BoxLeastSquares(
stimes*u.day,
smags*u.mag,
dy=serrs*u.mag
)
periods = nparray(
blsmodel.autoperiod(
durations*u.day,
minimum_period=startp,
maximum_period=endp,
minimum_n_transit=blsmintransits,
frequency_factor=blsfreqfactor
)
)
frequencies = 1.0/periods
nfreq = frequencies.size
if verbose:
LOGINFO(
"autofreq = 'astropy', used .autoperiod() with "
"minimum_n_transit = %s, freq_factor = %s "
"to generate the frequency grid" %
(blsmintransits, blsfreqfactor)
)
LOGINFO('stepsize = %s, nfreq = %s, minfreq = %.5f, '
'maxfreq = %.5f, ndurations = %s' %
(abs(frequencies[1] - frequencies[0]),
nfreq,
1.0/periods.max(),
1.0/periods.min(),
durations.size))
del blsmodel
del durations
# otherwise, use kbls method
else:
frequencies = minfreq + nparange(nfreq)*stepsize
# break up the tasks into chunks
csrem = int(fmod(nfreq, nworkers))
csint = int(float(nfreq/nworkers))
chunk_minfreqs, chunk_nfreqs = [], []
for x in range(nworkers):
this_minfreqs = frequencies[x*csint]
# handle usual nfreqs
if x < (nworkers - 1):
this_nfreqs = frequencies[x*csint:x*csint+csint].size
else:
this_nfreqs = frequencies[x*csint:x*csint+csint+csrem].size
chunk_minfreqs.append(this_minfreqs)
chunk_nfreqs.append(this_nfreqs)
# populate the tasks list
#
# task[0] = times
# task[1] = mags
# task[2] = errs
# task[3] = magsarefluxes
# task[4] = minfreq
# task[5] = nfreq
# task[6] = stepsize
# task[7] = ndurations
# task[8] = mintransitduration
# task[9] = maxtransitduration
# task[10] = blsobjective
# task[11] = blsmethod
# task[12] = blsoversample
# populate the tasks list
tasks = [(stimes, smags, serrs, magsarefluxes,
chunk_minf, chunk_nf, stepsize,
ndurations, mintransitduration, maxtransitduration,
blsobjective, blsmethod, blsoversample)
for (chunk_minf, chunk_nf)
in zip(chunk_minfreqs, chunk_nfreqs)]
if verbose:
for ind, task in enumerate(tasks):
LOGINFO('worker %s: minfreq = %.6f, nfreqs = %s' %
(ind+1, task[4], task[5]))
LOGINFO('running...')
# return tasks
# start the pool
pool = Pool(nworkers)
results = pool.map(_parallel_bls_worker, tasks)
pool.close()
pool.join()
del pool
# now concatenate the output lsp arrays
lsp = npconcatenate([x['power'] for x in results])
periods = 1.0/frequencies
# find the nbestpeaks for the periodogram: 1. sort the lsp array
# by highest value first 2. go down the values until we find
# five values that are separated by at least periodepsilon in
# period
# make sure to get only the finite peaks in the periodogram
# this is needed because BLS may produce infs for some peaks
finitepeakind = npisfinite(lsp)
finlsp = lsp[finitepeakind]
finperiods = periods[finitepeakind]
# make sure that finlsp has finite values before we work on it
try:
bestperiodind = npargmax(finlsp)
except ValueError:
LOGERROR('no finite periodogram values '
'for this mag series, skipping...')
return {'bestperiod':npnan,
'bestlspval':npnan,
'nbestpeaks':nbestpeaks,
'nbestinds':None,
'nbestlspvals':None,
'nbestperiods':None,
'lspvals':None,
'periods':None,
'durations':None,
'method':'bls',
'blsresult':None,
'blsmodel':None,
'kwargs':{'startp':startp,
'endp':endp,
'stepsize':stepsize,
'mintransitduration':mintransitduration,
'maxtransitduration':maxtransitduration,
'ndurations':ndurations,
'blsobjective':blsobjective,
'blsmethod':blsmethod,
'blsoversample':blsoversample,
'autofreq':autofreq,
'periodepsilon':periodepsilon,
'nbestpeaks':nbestpeaks,
'sigclip':sigclip,
'magsarefluxes':magsarefluxes}}
sortedlspind = npargsort(finlsp)[::-1]
sortedlspperiods = finperiods[sortedlspind]
sortedlspvals = finlsp[sortedlspind]
# now get the nbestpeaks
nbestperiods, nbestlspvals, nbestinds, peakcount = (
[finperiods[bestperiodind]],
[finlsp[bestperiodind]],
[bestperiodind],
1
)
prevperiod = sortedlspperiods[0]
# find the best nbestpeaks in the lsp and their periods
for period, lspval, ind in zip(sortedlspperiods,
sortedlspvals,
sortedlspind):
if peakcount == nbestpeaks:
break
perioddiff = abs(period - prevperiod)
bestperiodsdiff = [abs(period - x) for x in nbestperiods]
# this ensures that this period is different from the last
# period and from all the other existing best periods by
# periodepsilon to make sure we jump to an entire different
# peak in the periodogram
if (perioddiff > (periodepsilon*prevperiod) and
all(x > (periodepsilon*period)
for x in bestperiodsdiff)):
nbestperiods.append(period)
nbestlspvals.append(lspval)
nbestinds.append(ind)
peakcount = peakcount + 1
prevperiod = period
# generate the return dict
resultdict = {
'bestperiod':finperiods[bestperiodind],
'bestlspval':finlsp[bestperiodind],
'nbestpeaks':nbestpeaks,
'nbestinds':nbestinds,
'nbestlspvals':nbestlspvals,
'nbestperiods':nbestperiods,
'lspvals':lsp,
'frequencies':frequencies,
'periods':periods,
'durations':[x['durations'] for x in results],
'blsresult':[x['blsresult'] for x in results],
'blsmodel':[x['blsmodel'] for x in results],
'stepsize':stepsize,
'nfreq':nfreq,
'mintransitduration':mintransitduration,
'maxtransitduration':maxtransitduration,
'method':'bls',
'kwargs':{'startp':startp,
'endp':endp,
'stepsize':stepsize,
'mintransitduration':mintransitduration,
'maxtransitduration':maxtransitduration,
'ndurations':ndurations,
'blsobjective':blsobjective,
'blsmethod':blsmethod,
'blsoversample':blsoversample,
'autofreq':autofreq,
'periodepsilon':periodepsilon,
'nbestpeaks':nbestpeaks,
'sigclip':sigclip,
'magsarefluxes':magsarefluxes}
}
return resultdict
else:
LOGERROR('no good detections for these times and mags, skipping...')
return {'bestperiod':npnan,
'bestlspval':npnan,
'nbestinds':None,
'nbestpeaks':nbestpeaks,
'nbestlspvals':None,
'nbestperiods':None,
'lspvals':None,
'periods':None,
'durations':None,
'blsresult':None,
'blsmodel':None,
'stepsize':stepsize,
'nfreq':None,
'nphasebins':None,
'mintransitduration':mintransitduration,
'maxtransitduration':maxtransitduration,
'method':'bls',
'kwargs':{'startp':startp,
'endp':endp,
'stepsize':stepsize,
'mintransitduration':mintransitduration,
'maxtransitduration':maxtransitduration,
'ndurations':ndurations,
'blsobjective':blsobjective,
'blsmethod':blsmethod,
'blsoversample':blsoversample,
'autofreq':autofreq,
'periodepsilon':periodepsilon,
'nbestpeaks':nbestpeaks,
'sigclip':sigclip,
'magsarefluxes':magsarefluxes}}
def _parse_csv_header(header):
'''This parses a CSV header from a K2 CSV LC.
Returns a dict that can be used to update an existing lcdict with the
relevant metadata info needed to form a full LC.
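Schematically, the header this parser expects looks like the following
(reconstructed from the parsing logic below, so treat it as an illustration
rather than the authoritative K2 CSV format; values are placeholders)::
    # METADATA
    # objectid = ...,kepid = ...,ucac4id = ...,kepmag = ...
    # ra = ...,decl = ...,ndet = ...,k2campaign = ...
    # fovccd = ...,fovchannel = ...,fovmodule = ...
    # qualflag = ...,bjdoffset = ...,napertures = ...
    # aperpixradius = ...,...
    # COLUMNS
    # 00 - <column name>
    # ...
    # LIGHTCURVE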
'''
# first, break into lines
headerlines = header.split('\n')
headerlines = [x.lstrip('# ') for x in headerlines]
# next, find the indices of the '# COLUMNS' line and '# LIGHTCURVE' line
metadatastart = headerlines.index('METADATA')
columnstart = headerlines.index('COLUMNS')
lcstart = headerlines.index('LIGHTCURVE')
# get the lines for the metadata and columndefs
metadata = headerlines[metadatastart+1:columnstart-1]
columndefs = headerlines[columnstart+1:lcstart-1]
# parse the metadata
metainfo = [x.split(',') for x in metadata][:-1]
aperpixradius = metadata[-1]
objectid, kepid, ucac4id, kepmag = metainfo[0]
objectid, kepid, ucac4id, kepmag = (objectid.split(' = ')[-1],
kepid.split(' = ')[-1],
ucac4id.split(' = ')[-1],
kepmag.split(' = ')[-1])
kepmag = float(kepmag) if kepmag else None
ra, decl, ndet, k2campaign = metainfo[1]
ra, decl, ndet, k2campaign = (ra.split(' = ')[-1],
decl.split(' = ')[-1],
int(ndet.split(' = ')[-1]),
int(k2campaign.split(' = ')[-1]))
fovccd, fovchannel, fovmodule = metainfo[2]
fovccd, fovchannel, fovmodule = (int(fovccd.split(' = ')[-1]),
int(fovchannel.split(' = ')[-1]),
int(fovmodule.split(' = ')[-1]))
try:
qualflag, bjdoffset, napertures = metainfo[3]
qualflag, bjdoffset, napertures = (int(qualflag.split(' = ')[-1]),
float(bjdoffset.split(' = ')[-1]),
int(napertures.split(' = ')[-1]))
kernelspec = None
except Exception as e:
qualflag, bjdoffset, napertures, kernelspec = metainfo[3]
qualflag, bjdoffset, napertures, kernelspec = (
int(qualflag.split(' = ')[-1]),
float(bjdoffset.split(' = ')[-1]),
int(napertures.split(' = ')[-1]),
str(kernelspec.split(' = ')[-1])
)
aperpixradius = aperpixradius.split(' = ')[-1].split(',')
aperpixradius = [float(x) for x in aperpixradius]
# parse the columndefs
columns = [x.split(' - ')[1] for x in columndefs]
metadict = {'objectid':objectid,
'objectinfo':{
'objectid':objectid,
'kepid':kepid,
'ucac4id':ucac4id,
'kepmag':kepmag,
'ra':ra,
'decl':decl,
'ndet':ndet,
'k2campaign':k2campaign,
'fovccd':fovccd,
'fovchannel':fovchannel,
'fovmodule':fovmodule,
'qualflag':qualflag,
'bjdoffset':bjdoffset,
'napertures':napertures,
'kernelspec':kernelspec,
'aperpixradius':aperpixradius,
},
'columns':columns}
return metadict
def read_csv_lightcurve(lcfile):
'''
This reads in a K2 lightcurve in CSV format. Transparently reads gzipped
files.
Parameters
----------
lcfile : str
The light curve file to read.
Returns
-------
dict
Returns an lcdict.
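A minimal usage sketch (the path is a placeholder; the available column keys
depend on `COLUMNDEFS`)::
    lcdict = read_csv_lightcurve('/path/to/k2-lc.csv.gz')
    print(lcdict['objectid'], lcdict['columns'])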
'''
# read in the file first
if '.gz' in os.path.basename(lcfile):
LOGINFO('reading gzipped K2 LC: %s' % lcfile)
infd = gzip.open(lcfile,'rb')
else:
LOGINFO('reading K2 LC: %s' % lcfile)
infd = open(lcfile,'rb')
lctext = infd.read().decode()
infd.close()
# figure out the header and get the LC columns
lcstart = lctext.index('# LIGHTCURVE\n')
lcheader = lctext[:lcstart+12]
lccolumns = lctext[lcstart+13:].split('\n')
lccolumns = [x.split(',') for x in lccolumns if len(x) > 0]
# initialize the lcdict and parse the CSV header
lcdict = _parse_csv_header(lcheader)
# transpose the LC rows into columns
lccolumns = list(zip(*lccolumns))
# write the columns to the dict
for colind, col in enumerate(lcdict['columns']):
# this picks out the caster to use when reading each column using the
# definitions in the lcutils.COLUMNDEFS dictionary
lcdict[col.lower()] = np.array([COLUMNDEFS[col][2](x)
for x in lccolumns[colind]])
lcdict['columns'] = [x.lower() for x in lcdict['columns']]
return lcdict
def get_starfeatures(lcfile,
outdir,
kdtree,
objlist,
lcflist,
neighbor_radius_arcsec,
deredden=True,
custom_bandpasses=None,
lcformat='hat-sql',
lcformatdir=None):
'''This runs the functions from :py:func:`astrobase.varclass.starfeatures`
on a single light curve file.
Parameters
----------
lcfile : str
This is the LC file to extract star features for.
outdir : str
This is the directory to write the output pickle to.
kdtree : scipy.spatial.cKDTree
This is a `scipy.spatial.KDTree` or `cKDTree` used to calculate neighbor
proximity features. This is for the light curve catalog this object is
in.
objlist : np.array
This is a Numpy array of object IDs in the same order as the
`kdtree.data` np.array. This is for the light curve catalog this object
is in.
lcflist : np.array
This is a Numpy array of light curve filenames in the same order as
`kdtree.data`. This is for the light curve catalog this object is in.
neighbor_radius_arcsec : float
This indicates the radius in arcsec to search for neighbors for this
object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,
and in GAIA.
deredden : bool
This controls if the colors and any color classifications will be
dereddened using 2MASS DUST.
custom_bandpasses : dict or None
This is a dict used to define any custom bandpasses in the
`in_objectinfo` dict you want to make this function aware of and
generate colors for. Use the format below for this dict::
{
'<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',
'label':'<band_label_1>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
.
...
.
'<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',
'label':'<band_label_N>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
}
Where:
`bandpass_key` is a key to use to refer to this bandpass in the
`objectinfo` dict, e.g. 'sdssg' for SDSS g band
`twomass_dust_key` is the key to use in the 2MASS DUST result table for
reddening per band-pass. For example, given the following DUST result
table (using http://irsa.ipac.caltech.edu/applications/DUST/)::
|Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|
|char |float |float |float |float |float|
| |microns| |mags | |mags |
CTIO U 0.3734 4.107 0.209 4.968 0.253
CTIO B 0.4309 3.641 0.186 4.325 0.221
CTIO V 0.5517 2.682 0.137 3.240 0.165
.
.
...
The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to
skip DUST lookup and want to pass in a specific reddening magnitude
for your bandpass, use a float for the value of
`twomass_dust_key`. If you want to skip DUST lookup entirely for
this bandpass, use None for the value of `twomass_dust_key`.
`band_label` is the label to use for this bandpass, e.g. 'W1' for
WISE-1 band, 'u' for SDSS u, etc.
The 'colors' list contains color definitions for all colors you want
to generate using this bandpass. This list contains elements of the
form::
['<bandkey1>-<bandkey2>','<BAND1> - <BAND2>']
where the first item is the bandpass keys making up this color,
and the second item is the label for this color to be used by the
frontends. An example::
['sdssu-sdssg','u - g']
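For instance, a hypothetical entry for a V-band magnitude stored under a
'vmag' key (with a B-V color against a 'bmag' key) could look like::
    custom_bandpasses = {
        'vmag': {'dustkey': 'CTIO V',
                 'label': 'V',
                 'colors': [['bmag-vmag', 'B - V']]},
    }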
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
Returns
-------
str
Path to the output pickle containing all of the star features for this
object.
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(dfileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
try:
# get the LC into a dict
lcdict = readerfunc(lcfile)
# this should handle lists/tuples being returned by readerfunc
# we assume that the first element is the actual lcdict
# FIXME: figure out how to not need this assumption
if ( (isinstance(lcdict, (list, tuple))) and
(isinstance(lcdict[0], dict)) ):
lcdict = lcdict[0]
resultdict = {'objectid':lcdict['objectid'],
'info':lcdict['objectinfo'],
'lcfbasename':os.path.basename(lcfile)}
# run the coord features first
coordfeat = starfeatures.coord_features(lcdict['objectinfo'])
# next, run the color features
colorfeat = starfeatures.color_features(
lcdict['objectinfo'],
deredden=deredden,
custom_bandpasses=custom_bandpasses
)
# run a rough color classification
colorclass = starfeatures.color_classification(colorfeat,
coordfeat)
# finally, run the neighbor features
nbrfeat = starfeatures.neighbor_gaia_features(lcdict['objectinfo'],
kdtree,
neighbor_radius_arcsec)
# get the objectids of the neighbors found if any
if nbrfeat['nbrindices'].size > 0:
nbrfeat['nbrobjectids'] = objlist[nbrfeat['nbrindices']]
nbrfeat['closestnbrobjectid'] = objlist[
nbrfeat['closestdistnbrind']
]
nbrfeat['closestnbrlcfname'] = lcflist[
nbrfeat['closestdistnbrind']
]
else:
nbrfeat['nbrobjectids'] = np.array([])
nbrfeat['closestnbrobjectid'] = np.array([])
nbrfeat['closestnbrlcfname'] = np.array([])
# update the result dict
resultdict.update(coordfeat)
resultdict.update(colorfeat)
resultdict.update(colorclass)
resultdict.update(nbrfeat)
outfile = os.path.join(outdir,
'starfeatures-%s.pkl' %
squeeze(resultdict['objectid']).replace(' ','-'))
with open(outfile, 'wb') as outfd:
pickle.dump(resultdict, outfd, protocol=4)
return outfile
except Exception as e:
LOGEXCEPTION('failed to get star features for %s because: %s' %
(os.path.basename(lcfile), e))
return None
def _starfeatures_worker(task):
'''
This wraps starfeatures.
'''
try:
(lcfile, outdir, kdtree, objlist,
lcflist, neighbor_radius_arcsec,
deredden, custom_bandpasses, lcformat, lcformatdir) = task
return get_starfeatures(lcfile, outdir,
kdtree, objlist, lcflist,
neighbor_radius_arcsec,
deredden=deredden,
custom_bandpasses=custom_bandpasses,
lcformat=lcformat,
lcformatdir=lcformatdir)
except Exception as e:
return None
def serial_starfeatures(lclist,
outdir,
lc_catalog_pickle,
neighbor_radius_arcsec,
maxobjects=None,
deredden=True,
custom_bandpasses=None,
lcformat='hat-sql',
lcformatdir=None):
'''This drives the `get_starfeatures` function for a collection of LCs.
Parameters
----------
lclist : list of str
The list of light curve file names to process.
outdir : str
The output directory where the results will be placed.
lc_catalog_pickle : str
The path to a catalog pickle containing a dict with at least:
- an object ID array accessible with `dict['objects']['objectid']`
- an LC filename array accessible with `dict['objects']['lcfname']`
- a `scipy.spatial.KDTree` or `cKDTree` object to use for finding
neighbors for each object accessible with `dict['kdtree']`
A catalog pickle of the form needed can be produced using
:py:func:`astrobase.lcproc.catalogs.make_lclist` or
:py:func:`astrobase.lcproc.catalogs.filter_lclist`.
neighbor_radius_arcsec : float
This indicates the radius in arcsec to search for neighbors for this
object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,
and in GAIA.
maxobjects : int
The number of objects to process from `lclist`.
deredden : bool
This controls if the colors and any color classifications will be
dereddened using 2MASS DUST.
custom_bandpasses : dict or None
This is a dict used to define any custom bandpasses in the
`in_objectinfo` dict you want to make this function aware of and
generate colors for. Use the format below for this dict::
{
'<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',
'label':'<band_label_1>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
.
...
.
'<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',
'label':'<band_label_N>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
}
Where:
`bandpass_key` is a key to use to refer to this bandpass in the
`objectinfo` dict, e.g. 'sdssg' for SDSS g band
`twomass_dust_key` is the key to use in the 2MASS DUST result table for
reddening per band-pass. For example, given the following DUST result
table (using http://irsa.ipac.caltech.edu/applications/DUST/)::
|Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|
|char |float |float |float |float |float|
| |microns| |mags | |mags |
CTIO U 0.3734 4.107 0.209 4.968 0.253
CTIO B 0.4309 3.641 0.186 4.325 0.221
CTIO V 0.5517 2.682 0.137 3.240 0.165
.
.
...
The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to
skip DUST lookup and want to pass in a specific reddening magnitude
for your bandpass, use a float for the value of
`twomass_dust_key`. If you want to skip DUST lookup entirely for
this bandpass, use None for the value of `twomass_dust_key`.
`band_label` is the label to use for this bandpass, e.g. 'W1' for
WISE-1 band, 'u' for SDSS u, etc.
The 'colors' list contains color definitions for all colors you want
to generate using this bandpass. This list contains elements of the
form::
['<bandkey1>-<bandkey2>','<BAND1> - <BAND2>']
where the first item is the bandpass keys making up this color,
and the second item is the label for this color to be used by the
frontends. An example::
['sdssu-sdssg','u - g']
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
Returns
-------
list of str
A list of all star features pickles produced.
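A minimal usage sketch (paths are placeholders)::
    pickles = serial_starfeatures(lclist,
                                  '/path/to/starfeatures-output',
                                  '/path/to/lc-catalog.pkl',
                                  30.0,
                                  lcformat='hat-sql')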
'''
# make sure to make the output directory if it doesn't exist
if not os.path.exists(outdir):
os.makedirs(outdir)
if maxobjects:
lclist = lclist[:maxobjects]
# read in the kdtree pickle
with open(lc_catalog_pickle, 'rb') as infd:
kdt_dict = pickle.load(infd)
kdt = kdt_dict['kdtree']
objlist = kdt_dict['objects']['objectid']
objlcfl = kdt_dict['objects']['lcfname']
tasks = [(x, outdir, kdt, objlist, objlcfl,
neighbor_radius_arcsec,
deredden, custom_bandpasses,
lcformat, lcformatdir) for x in lclist]
results = []
for task in tqdm(tasks):
    results.append(_starfeatures_worker(task))
return results
def parallel_starfeatures(lclist,
outdir,
lc_catalog_pickle,
neighbor_radius_arcsec,
maxobjects=None,
deredden=True,
custom_bandpasses=None,
lcformat='hat-sql',
lcformatdir=None,
nworkers=NCPUS):
'''This runs `get_starfeatures` in parallel for all light curves in `lclist`.
Parameters
----------
lclist : list of str
The list of light curve file names to process.
outdir : str
The output directory where the results will be placed.
lc_catalog_pickle : str
The path to a catalog pickle containing a dict with at least:
- an object ID array accessible with `dict['objects']['objectid']`
- an LC filename array accessible with `dict['objects']['lcfname']`
- a `scipy.spatial.KDTree` or `cKDTree` object to use for finding
neighbors for each object accessible with `dict['kdtree']`
A catalog pickle of the form needed can be produced using
:py:func:`astrobase.lcproc.catalogs.make_lclist` or
:py:func:`astrobase.lcproc.catalogs.filter_lclist`.
neighbor_radius_arcsec : float
This indicates the radius in arcsec to search for neighbors for this
object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,
and in GAIA.
maxobjects : int
The number of objects to process from `lclist`.
deredden : bool
This controls if the colors and any color classifications will be
dereddened using 2MASS DUST.
custom_bandpasses : dict or None
This is a dict used to define any custom bandpasses in the
`in_objectinfo` dict you want to make this function aware of and
generate colors for. Use the format below for this dict::
{
'<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',
'label':'<band_label_1>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
.
...
.
'<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',
'label':'<band_label_N>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
}
Where:
`bandpass_key` is a key to use to refer to this bandpass in the
`objectinfo` dict, e.g. 'sdssg' for SDSS g band
`twomass_dust_key` is the key to use in the 2MASS DUST result table for
reddening per band-pass. For example, given the following DUST result
table (using http://irsa.ipac.caltech.edu/applications/DUST/)::
|Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|
|char |float |float |float |float |float|
| |microns| |mags | |mags |
CTIO U 0.3734 4.107 0.209 4.968 0.253
CTIO B 0.4309 3.641 0.186 4.325 0.221
CTIO V 0.5517 2.682 0.137 3.240 0.165
.
.
...
The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to
skip DUST lookup and want to pass in a specific reddening magnitude
for your bandpass, use a float for the value of
`twomass_dust_key`. If you want to skip DUST lookup entirely for
this bandpass, use None for the value of `twomass_dust_key`.
`band_label` is the label to use for this bandpass, e.g. 'W1' for
WISE-1 band, 'u' for SDSS u, etc.
The 'colors' list contains color definitions for all colors you want
to generate using this bandpass. This list contains elements of the
form::
['<bandkey1>-<bandkey2>','<BAND1> - <BAND2>']
where the first item is the bandpass keys making up this color,
and the second item is the label for this color to be used by the
frontends. An example::
['sdssu-sdssg','u - g']
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
nworkers : int
The number of parallel workers to launch.
Returns
-------
dict
A dict with key:val pairs of the input light curve filename and the
output star features pickle for each LC processed.
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(dfileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
# make sure to make the output directory if it doesn't exist
if not os.path.exists(outdir):
os.makedirs(outdir)
if maxobjects:
lclist = lclist[:maxobjects]
# read in the kdtree pickle
with open(lc_catalog_pickle, 'rb') as infd:
kdt_dict = pickle.load(infd)
kdt = kdt_dict['kdtree']
objlist = kdt_dict['objects']['objectid']
objlcfl = kdt_dict['objects']['lcfname']
tasks = [(x, outdir, kdt, objlist, objlcfl,
neighbor_radius_arcsec,
deredden, custom_bandpasses, lcformat, lcformatdir) for x in lclist]
with ProcessPoolExecutor(max_workers=nworkers) as executor:
resultfutures = executor.map(_starfeatures_worker, tasks)
results = [x for x in resultfutures]
resdict = {os.path.basename(x):y for (x,y) in zip(lclist, results)}
return resdict
def parallel_starfeatures_lcdir(lcdir,
outdir,
lc_catalog_pickle,
neighbor_radius_arcsec,
fileglob=None,
maxobjects=None,
deredden=True,
custom_bandpasses=None,
lcformat='hat-sql',
lcformatdir=None,
nworkers=NCPUS,
recursive=True):
'''This runs parallel star feature extraction for a directory of LCs.
Parameters
----------
lcdir : str
The directory to search for light curves.
outdir : str
The output directory where the results will be placed.
lc_catalog_pickle : str
The path to a catalog pickle containing a dict with at least:
- an object ID array accessible with `dict['objects']['objectid']`
- an LC filename array accessible with `dict['objects']['lcfname']`
- a `scipy.spatial.KDTree` or `cKDTree` object to use for finding
neighbors for each object accessible with `dict['kdtree']`
A catalog pickle of the form needed can be produced using
:py:func:`astrobase.lcproc.catalogs.make_lclist` or
:py:func:`astrobase.lcproc.catalogs.filter_lclist`.
neighbor_radius_arcsec : float
This indicates the radius in arcsec to search for neighbors for this
object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,
and in GAIA.
fileglob : str
The UNIX file glob to use to search for the light curves in `lcdir`. If
None, the default value for the light curve format specified will be
used.
maxobjects : int
The maximum number of objects to process from the light curves found in `lcdir`.
deredden : bool
This controls if the colors and any color classifications will be
dereddened using 2MASS DUST.
custom_bandpasses : dict or None
This is a dict used to define any custom bandpasses in the
`in_objectinfo` dict you want to make this function aware of and
generate colors for. Use the format below for this dict::
{
'<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',
'label':'<band_label_1>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
.
...
.
'<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',
'label':'<band_label_N>',
'colors':[['<bandkey1>-<bandkey2>',
'<BAND1> - <BAND2>'],
['<bandkey3>-<bandkey4>',
'<BAND3> - <BAND4>']]},
}
Where:
`bandpass_key` is a key to use to refer to this bandpass in the
`objectinfo` dict, e.g. 'sdssg' for SDSS g band
`twomass_dust_key` is the key to use in the 2MASS DUST result table for
reddening per band-pass. For example, given the following DUST result
table (using http://irsa.ipac.caltech.edu/applications/DUST/)::
|Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|
|char |float |float |float |float |float|
| |microns| |mags | |mags |
CTIO U 0.3734 4.107 0.209 4.968 0.253
CTIO B 0.4309 3.641 0.186 4.325 0.221
CTIO V 0.5517 2.682 0.137 3.240 0.165
.
.
...
The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to
skip DUST lookup and want to pass in a specific reddening magnitude
for your bandpass, use a float for the value of
`twomass_dust_key`. If you want to skip DUST lookup entirely for
this bandpass, use None for the value of `twomass_dust_key`.
`band_label` is the label to use for this bandpass, e.g. 'W1' for
WISE-1 band, 'u' for SDSS u, etc.
The 'colors' list contains color definitions for all colors you want
to generate using this bandpass. This list contains elements of the
form::
['<bandkey1>-<bandkey2>','<BAND1> - <BAND2>']
where the first item is the bandpass keys making up this color,
and the second item is the label for this color to be used by the
frontends. An example::
['sdssu-sdssg','u - g']
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
nworkers : int
The number of parallel workers to launch.
Returns
-------
dict
A dict with key:val pairs of the input light curve filename and the
output star features pickle for each LC processed.
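A minimal usage sketch (paths are placeholders; the default `fileglob` for
the registered LC format is used)::
    resdict = parallel_starfeatures_lcdir('/path/to/light-curves',
                                          '/path/to/starfeatures-output',
                                          '/path/to/lc-catalog.pkl',
                                          30.0,
                                          lcformat='hat-sql',
                                          nworkers=4)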
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(dfileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
if not fileglob:
fileglob = dfileglob
# now find the files
LOGINFO('searching for %s light curves in %s ...' % (lcformat, lcdir))
if recursive is False:
matching = glob.glob(os.path.join(lcdir, fileglob))
else:
# use recursive glob for Python 3.5+
if sys.version_info[:2] > (3,4):
matching = glob.glob(os.path.join(lcdir,
'**',
fileglob),recursive=True)
# otherwise, use os.walk and glob
else:
# use os.walk to go through the directories
walker = os.walk(lcdir)
matching = []
for root, dirs, _files in walker:
for sdir in dirs:
searchpath = os.path.join(root,
sdir,
fileglob)
foundfiles = glob.glob(searchpath)
if foundfiles:
matching.extend(foundfiles)
# now that we have all the files, process them
if matching and len(matching) > 0:
LOGINFO('found %s light curves, getting starfeatures...' %
len(matching))
return parallel_starfeatures(matching,
outdir,
lc_catalog_pickle,
neighbor_radius_arcsec,
deredden=deredden,
custom_bandpasses=custom_bandpasses,
maxobjects=maxobjects,
lcformat=lcformat,
lcformatdir=lcformatdir,
nworkers=nworkers)
else:
LOGERROR('no light curve files in %s format found in %s' % (lcformat,
lcdir))
return None
def checkplot_dict(
lspinfolist,
times,
mags,
errs,
fast_mode=False,
magsarefluxes=False,
nperiodstouse=3,
objectinfo=None,
deredden_object=True,
custom_bandpasses=None,
gaia_submit_timeout=10.0,
gaia_submit_tries=3,
gaia_max_timeout=180.0,
gaia_mirror=None,
complete_query_later=True,
varinfo=None,
getvarfeatures=True,
lclistpkl=None,
nbrradiusarcsec=60.0,
maxnumneighbors=5,
xmatchinfo=None,
xmatchradiusarcsec=3.0,
lcfitfunc=None,
lcfitparams=None,
externalplots=None,
findercmap='gray_r',
finderconvolve=None,
findercachedir='~/.astrobase/stamp-cache',
normto='globalmedian',
normmingap=4.0,
sigclip=4.0,
varepoch='min',
phasewrap=True,
phasesort=True,
phasebin=0.002,
minbinelems=7,
plotxlim=(-0.8,0.8),
xliminsetmode=False,
plotdpi=100,
bestperiodhighlight=None,
xgridlines=None,
mindet=99,
verbose=True
):
'''This writes a multiple lspinfo checkplot to a dict.
This function can take input from multiple lspinfo dicts (e.g. a list of
output dicts or gzipped pickles of dicts from independent runs of BLS, PDM,
AoV, or GLS period-finders in periodbase).
NOTE: if `lspinfolist` contains more than one lspinfo object with the same
lspmethod ('pdm','gls','sls','aov','bls'), the latest one in the list will
overwrite the earlier ones.
The output dict contains all the plots (magseries and phased magseries),
periodograms, object information, variability information, light curves, and
phased light curves. This can be written to:
- a pickle with `checkplot_pickle` below
- a PNG with `checkplot.pkl_png.checkplot_pickle_to_png`
Parameters
----------
lspinfolist : list of dicts
This is a list of dicts containing period-finder results ('lspinfo'
dicts). These can be from any of the period-finder methods in
astrobase.periodbase. To incorporate external period-finder results into
checkplots, these dicts must be of the form below, including at least
the keys indicated here::
{'periods': np.array of all periods searched by the period-finder,
'lspvals': np.array of periodogram power value for each period,
'bestperiod': a float value that is the period with the highest
peak in the periodogram, i.e. the most-likely actual
period,
'method': a three-letter code naming the period-finder used; must
be one of the keys in the
`astrobase.periodbase.METHODLABELS` dict,
'nbestperiods': a list of the periods corresponding to periodogram
peaks (`nbestlspvals` below) to annotate on the
periodogram plot so they can be called out
visually,
'nbestlspvals': a list of the power values associated with
periodogram peaks to annotate on the periodogram
plot so they can be called out visually; should be
the same length as `nbestperiods` above}
`nbestperiods` and `nbestlspvals` in each lspinfo dict must have at
least as many elements as the `nperiodstouse` kwarg to this function.
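For example, a minimal sketch of building this list from two astrobase
period-finders (period-finder kwargs elided)::
    gls = periodbase.pgen_lsp(times, mags, errs)
    bls = periodbase.bls_parallel_pfind(times, mags, errs)
    lspinfolist = [gls, bls]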
times,mags,errs : np.arrays
The magnitude/flux time-series to process for this checkplot along with
their associated measurement errors.
fast_mode : bool or float
This runs the external catalog operations in a "fast" mode, with short
timeouts and not trying to hit external catalogs that take a long time
to respond.
If this is set to True, the default settings for the external requests
will then become::
skyview_lookup = False
skyview_timeout = 10.0
skyview_retry_failed = False
dust_timeout = 10.0
gaia_submit_timeout = 7.0
gaia_max_timeout = 10.0
gaia_submit_tries = 2
complete_query_later = False
search_simbad = False
If this is a float, will run in "fast" mode with the provided timeout
value in seconds and the following settings::
skyview_lookup = True
skyview_timeout = fast_mode
skyview_retry_failed = False
dust_timeout = fast_mode
gaia_submit_timeout = 0.66*fast_mode
gaia_max_timeout = fast_mode
gaia_submit_tries = 2
complete_query_later = False
search_simbad = False
magsarefluxes : bool
If True, indicates the input time-series is fluxes and not mags so the
plot y-axis direction and range can be set appropriately.
nperiodstouse : int
This controls how many 'best' periods to make phased LC plots for. By
default, this is the 3 best. If this is set to None, all 'best' periods
present in each lspinfo dict's 'nbestperiods' key will be processed for
this checkplot.
objectinfo : dict or None
This is a dict containing information on the object whose light
curve is being processed. This function will then be able to
look up and download a finder chart for this object and write
that to the output checkplotdict. External services such as
GAIA, SIMBAD, TIC, etc. will also be used to look up this object
by its coordinates, and will add in information available from
those services.
This dict must be of the form and contain at least the keys described
below::
{'objectid': the name of the object,
'ra': the right ascension of the object in decimal degrees,
'decl': the declination of the object in decimal degrees,
'ndet': the number of observations of this object}
You can also provide magnitudes and proper motions of the object using
the following keys and the appropriate values in the `objectinfo`
dict. These will be used to calculate colors, total and reduced proper
motion, etc. and display these in the output checkplot PNG::
'pmra' -> the proper motion in mas/yr in right ascension,
'pmdecl' -> the proper motion in mas/yr in declination,
'umag' -> U mag -> colors: U-B, U-V, U-g
'bmag' -> B mag -> colors: U-B, B-V
'vmag' -> V mag -> colors: U-V, B-V, V-R, V-I, V-K
'rmag' -> R mag -> colors: V-R, R-I
'imag' -> I mag -> colors: g-I, V-I, R-I, B-I
'jmag' -> 2MASS J mag -> colors: J-H, J-K, g-J, i-J
'hmag' -> 2MASS H mag -> colors: J-H, H-K
'kmag' -> 2MASS Ks mag -> colors: g-Ks, H-Ks, J-Ks, V-Ks
'sdssu' -> SDSS u mag -> colors: u-g, u-V
'sdssg' -> SDSS g mag -> colors: g-r, g-i, g-K, u-g, U-g, g-J
'sdssr' -> SDSS r mag -> colors: r-i, g-r
'sdssi' -> SDSS i mag -> colors: r-i, i-z, g-i, i-J, i-W1
'sdssz' -> SDSS z mag -> colors: i-z, z-W2, g-z
'ujmag' -> UKIRT J mag -> colors: J-H, H-K, J-K, g-J, i-J
'uhmag' -> UKIRT H mag -> colors: J-H, H-K
'ukmag' -> UKIRT K mag -> colors: g-K, H-K, J-K, V-K
'irac1' -> Spitzer IRAC1 mag -> colors: i-I1, I1-I2
'irac2' -> Spitzer IRAC2 mag -> colors: I1-I2, I2-I3
'irac3' -> Spitzer IRAC3 mag -> colors: I2-I3
'irac4' -> Spitzer IRAC4 mag -> colors: I3-I4
'wise1' -> WISE W1 mag -> colors: i-W1, W1-W2
'wise2' -> WISE W2 mag -> colors: W1-W2, W2-W3
'wise3' -> WISE W3 mag -> colors: W2-W3
'wise4' -> WISE W4 mag -> colors: W3-W4
If you have magnitude measurements in other bands, use the
`custom_bandpasses` kwarg to pass these in.
If this is None, no object information will be incorporated into the
checkplot (kind of making it effectively useless for anything other than
glancing at the phased light curves at various 'best' periods from the
period-finder results).
deredden_object : bool
If this is True, will use the 2MASS DUST service to get extinction
coefficients in various bands, and then try to deredden the magnitudes
and colors of the object already present in the checkplot's objectinfo
dict.
custom_bandpasses : dict
This is a dict used to provide custom bandpass definitions for any
magnitude measurements in the objectinfo dict that are not automatically
recognized by :py:func:`astrobase.varclass.starfeatures.color_features`.
gaia_submit_timeout : float
Sets the timeout in seconds to use when submitting a request to look up
the object's information to the GAIA service. Note that if `fast_mode`
is set, this is ignored.
gaia_submit_tries : int
Sets the maximum number of times the GAIA services will be contacted to
obtain this object's information. If `fast_mode` is set, this is
ignored, and the services will be contacted only once (meaning that a
failure to respond will be silently ignored and no GAIA data will be
added to the checkplot's objectinfo dict).
gaia_max_timeout : float
Sets the timeout in seconds to use when waiting for the GAIA service to
respond to our request for the object's information. Note that if
`fast_mode` is set, this is ignored.
gaia_mirror : str or None
This sets the GAIA mirror to use. This is a key in the
`services.gaia.GAIA_URLS` dict which defines the URLs to hit for each
mirror.
complete_query_later : bool
If this is True, saves the state of GAIA queries that are not yet
complete when `gaia_max_timeout` is reached while waiting for the GAIA
service to respond to our request. A later call for GAIA info on the
same object will attempt to pick up the results from the existing query
if it's completed. If `fast_mode` is True, this is ignored.
varinfo : dict
If this is None, a blank dict of the form below will be added to the
checkplotdict::
{'objectisvar': None -> variability flag (None indicates unset),
'vartags': CSV str containing variability type tags from review,
'varisperiodic': None -> periodic variability flag (None -> unset),
'varperiod': the period associated with the periodic variability,
'varepoch': the epoch associated with the periodic variability}
If you provide a dict matching this format in this kwarg, this will be
passed unchanged to the output checkplotdict produced.
getvarfeatures : bool
If this is True, several light curve variability features for this
object will be calculated and added to the output checkplotdict as
checkplotdict['varinfo']['features']. This uses the function
`varclass.varfeatures.all_nonperiodic_features` so see its docstring for
the measures that are calculated (e.g. Stetson J indices, dispersion
measures, etc.)
lclistpkl : dict or str
If this is provided, must be a dict resulting from reading a catalog
produced by the `lcproc.catalogs.make_lclist` function or a str path
pointing to the pickle file produced by that function. This catalog is
used to find neighbors of the current object in the current light curve
collection. Looking at neighbors of the object within the radius
specified by `nbrradiusarcsec` is useful for light curves produced by
instruments that have a large pixel scale, so are susceptible to
blending of variability and potential confusion of neighbor variability
with that of the actual object being looked at. If this is None, no
neighbor lookups will be performed.
nbrradiusarcsec : float
The radius in arcseconds to use for a search conducted around the
coordinates of this object to look for any potential confusion and
blending of variability amplitude caused by their proximity.
maxnumneighbors : int
The maximum number of neighbors that will have their light curves and
magnitudes noted in this checkplot as potential blends with the target
object.
xmatchinfo : str or dict
This is either the xmatch dict produced by the function
`load_xmatch_external_catalogs` above, or the path to the xmatch info
pickle file produced by that function.
xmatchradiusarcsec : float
This is the cross-matching radius to use in arcseconds.
lcfitfunc : Python function or None
If provided, this should be a Python function that is used to fit a
model to the light curve. This fit is then overplotted for each phased
light curve in the checkplot. This function should have the following
signature:
`def lcfitfunc(times, mags, errs, period, **lcfitparams)`
where `lcfitparams` encapsulates all external parameters (i.e. number of
knots for a spline function, the degree of a Legendre polynomial fit,
etc., planet transit parameters) This function should return a Python
dict with the following structure (similar to the functions in
`astrobase.lcfit`) and at least the keys below::
{'fittype':<str: name of fit method>,
'fitchisq':<float: the chi-squared value of the fit>,
'fitredchisq':<float: the reduced chi-squared value of the fit>,
'fitinfo':{'fitmags':<ndarray: model mags/fluxes from fit func>},
'magseries':{'times':<ndarray: times where fitmags are evaluated>}}
Additional keys in the dict returned from this function can include
`fitdict['fitinfo']['finalparams']` for the final model fit parameters
(this will be used by the checkplotserver if present),
`fitdict['fitinfo']['fitepoch']` for the minimum light epoch returned by
the model fit, among others.
In any case, the output dict of `lcfitfunc` will be copied to the output
checkplotdict as::
checkplotdict[lspmethod][periodind]['lcfit'][<fittype>]
for each phased light curve.
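A minimal sketch of such a function (it overplots a fixed-amplitude cosine at
the given period using `numpy`; purely illustrative and not one of the
`astrobase.lcfit` functions)::
    def example_sinusoid_lcfitfunc(times, mags, errs, period,
                                   amplitude=0.05):
        import numpy as np
        # phase the time-series at the given period
        phase = ((times - times.min()) / period) % 1.0
        # model: median level plus a cosine of fixed amplitude
        fitmags = np.median(mags) + amplitude*np.cos(2.0*np.pi*phase)
        fitchisq = np.sum(((mags - fitmags)/errs)**2.0)
        return {'fittype': 'example-sinusoid',
                'fitchisq': fitchisq,
                'fitredchisq': fitchisq/(times.size - 1),
                'fitinfo': {'fitmags': fitmags},
                'magseries': {'times': times}}
Such a function would be passed as `lcfitfunc` along with, e.g.,
`lcfitparams={'amplitude': 0.05}`.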
lcfitparams : dict
A dict containing the LC fit parameters to use when calling the function
provided in `lcfitfunc`. This contains key-val pairs corresponding to
parameter names and their respective initial values to be used by the
fit function.
externalplots : list of tuples of str
If provided, this is a list of 4-element tuples containing:
1. path to PNG of periodogram from an external period-finding method
2. path to PNG of best period phased LC from the external period-finder
3. path to PNG of 2nd-best phased LC from the external period-finder
4. path to PNG of 3rd-best phased LC from the external period-finder
This can be used to incorporate external period-finding method results
into the output checkplot pickle or exported PNG to allow for comparison
with astrobase results.
Example of externalplots::
[('/path/to/external/bls-periodogram.png',
'/path/to/external/bls-phasedlc-plot-bestpeak.png',
'/path/to/external/bls-phasedlc-plot-peak2.png',
'/path/to/external/bls-phasedlc-plot-peak3.png'),
('/path/to/external/pdm-periodogram.png',
'/path/to/external/pdm-phasedlc-plot-bestpeak.png',
'/path/to/external/pdm-phasedlc-plot-peak2.png',
'/path/to/external/pdm-phasedlc-plot-peak3.png'),
...]
If `externalplots` is provided here, these paths will be stored in the
output checkplotdict. The `checkplot.pkl_png.checkplot_pickle_to_png`
function can then automatically retrieve these plot PNGs and put
them into the exported checkplot PNG.
findercmap : str or matplotlib.cm.ColorMap object
The Colormap object to use for the finder chart image.
finderconvolve : astropy.convolution.Kernel object or None
If not None, the Kernel object to use for convolving the finder image.
findercachedir : str
The path to the astrobase cache directory for finder chart downloads
from the NASA SkyView service.
normto : {'globalmedian', 'zero'} or a float
These are specified as below:
- 'globalmedian' -> norms each mag to the global median of the LC column
- 'zero' -> norms each mag to zero
- a float -> norms each mag to this specified float value.
normmingap : float
This defines how much the difference between consecutive measurements is
allowed to be to consider them as parts of different timegroups. By
default it is set to 4.0 days.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
varepoch : 'min' or float or list of lists or None
The epoch to use for this phased light curve plot tile. If this is a
float, will use the provided value directly. If this is 'min', will
automatically figure out the time-of-minimum of the phased light
curve. If this is None, will use the minimum value of `stimes` as the
epoch of the phased light curve plot. If this is a list of lists, will
use the provided value of `lspmethodind` to look up the current
period-finder method and the provided value of `periodind` to look up
the epoch associated with that method and the current period. This is
mostly only useful when `twolspmode` is True.
phasewrap : bool
If this is True, the phased time-series will be wrapped around
phase 0.0.
phasesort : bool
If True, will sort the phased light curve in order of increasing phase.
phasebin : float
The bin size to use to group together measurements closer than this
amount in phase. This is in units of phase. If this is a float, a
phase-binned version of the phased light curve will be overplotted on
top of the regular phased light curve.
minbinelems : int
The minimum number of elements required per phase bin to include it in
the phased LC plot.
plotxlim : sequence of two floats or None
The x-range (min, max) of the phased light curve plot. If None, will be
determined automatically.
xliminsetmode : bool
If this is True, the generated phased light curve plot will use the
values of `plotxlim` as the main plot x-axis limits (i.e. zoomed-in if
`plotxlim` is a range smaller than the full phase range), and will show
the full phased light curve plot as a smaller inset. Useful for
planetary transit light curves.
plotdpi : int
The resolution of the output plot PNGs in dots per inch.
bestperiodhighlight : str or None
If not None, this is a str with a matplotlib color specification to use
as the background color to highlight the phased light curve plot of the
'best' period and epoch combination. If None, no highlight will be
applied.
xgridlines : list of floats or None
If this is provided, it must be a list of floats corresponding to the
phase values at which to draw vertical dashed lines, as a way of
highlighting these phases.
mindet : int
The minimum number of observations the input object's mag/flux
time-series must have for this function to plot its light curve and
phased light curve. If the object has fewer than this number, no light
curves will be
plotted, but the checkplotdict will still contain all of the other
information.
verbose : bool
If True, will indicate progress and warn about problems.
Returns
-------
dict
Returns a checkplotdict.
'''
# if an objectinfo dict is absent, we'll generate a fake objectid based on
# the second five time and mag array values. this should be OK to ID the
# object across repeated runs of this function with the same times, mags,
# errs, but should provide enough uniqueness otherwise (across different
# times/mags array inputs). this is all done so we can still save checkplots
# correctly to pickles after reviewing them using checkplotserver
try:
objuuid = hashlib.sha512(times[5:10].tostring() +
mags[5:10].tostring()).hexdigest()[:5]
except Exception as e:
if verbose:
LOGWARNING('times, mags, and errs may have too few items')
objuuid = hashlib.sha512(times.tostring() +
mags.tostring()).hexdigest()[:5]
if (objectinfo is None):
if verbose:
LOGWARNING('no objectinfo dict provided as kwarg, '
'adding a randomly generated objectid')
objectinfo = {'objectid':objuuid}
# special for HAT stuff, eventually we should add objectid to
# lcd['objectinfo'] there as well
elif (isinstance(objectinfo, dict) and 'hatid' in objectinfo):
objectinfo['objectid'] = objectinfo['hatid']
elif ((isinstance(objectinfo, dict) and 'objectid' not in objectinfo) or
(isinstance(objectinfo, dict) and 'objectid' in objectinfo and
(objectinfo['objectid'] is None or objectinfo['objectid'] == ''))):
if verbose:
LOGWARNING('adding a randomly generated objectid '
'since none was provided in objectinfo dict')
objectinfo['objectid'] = objuuid
# 0. get the objectinfo and finder chart and initialize the checkplotdict
checkplotdict = _pkl_finder_objectinfo(
objectinfo,
varinfo,
findercmap,
finderconvolve,
sigclip,
normto,
normmingap,
deredden_object=deredden_object,
custom_bandpasses=custom_bandpasses,
lclistpkl=lclistpkl,
nbrradiusarcsec=nbrradiusarcsec,
maxnumneighbors=maxnumneighbors,
plotdpi=plotdpi,
verbose=verbose,
findercachedir=findercachedir,
gaia_submit_timeout=gaia_submit_timeout,
gaia_submit_tries=gaia_submit_tries,
gaia_max_timeout=gaia_max_timeout,
gaia_mirror=gaia_mirror,
complete_query_later=complete_query_later,
fast_mode=fast_mode
)
# try again to get the right objectid
if (objectinfo and isinstance(objectinfo, dict) and
'objectid' in objectinfo and objectinfo['objectid']):
checkplotdict['objectid'] = objectinfo['objectid']
# filter the input times, mags, errs; do sigclipping and normalization
stimes, smags, serrs = sigclip_magseries(times,
mags,
errs,
magsarefluxes=magsarefluxes,
sigclip=sigclip)
# fail early if not enough light curve points
if ((stimes is None) or (smags is None) or (serrs is None) or
(stimes.size < 49) or (smags.size < 49) or (serrs.size < 49)):
LOGERROR("one or more of times, mags, errs appear to be None "
"after sig-clipping. are the measurements all nan? "
"can't make a checkplot for this objectid: %s" %
checkplotdict['objectid'])
checkplotdict['magseries'] = None
checkplotdict['status'] = 'failed: LC points appear to be all nan'
return checkplotdict
# this may fix some unpickling issues for astropy.table.Column objects
# we convert them back to ndarrays
if isinstance(stimes, AstColumn):
stimes = stimes.data
LOGWARNING('times is an astropy.table.Column object, '
'changing to numpy array because of '
'potential unpickling issues')
if isinstance(smags, AstColumn):
smags = smags.data
LOGWARNING('mags is an astropy.table.Column object, '
'changing to numpy array because of '
'potential unpickling issues')
if isinstance(serrs, AstColumn):
serrs = serrs.data
LOGWARNING('errs is an astropy.table.Column object, '
'changing to numpy array because of '
'potential unpickling issues')
# report on how sigclip went
if verbose:
LOGINFO('sigclip = %s: before = %s observations, '
'after = %s observations' %
(sigclip, len(times), len(stimes)))
# take care of the normalization
if normto is not False:
stimes, smags = normalize_magseries(stimes, smags,
normto=normto,
magsarefluxes=magsarefluxes,
mingap=normmingap)
# make sure we have some lightcurve points to plot after sigclip
if len(stimes) > mindet:
# 1. get the mag series plot using these filtered stimes, smags, serrs
magseriesdict = _pkl_magseries_plot(stimes, smags, serrs,
plotdpi=plotdpi,
magsarefluxes=magsarefluxes)
# update the checkplotdict
checkplotdict.update(magseriesdict)
# 2. for each lspinfo in lspinfolist, read it in (from pkl or pkl.gz
# if necessary), make the periodogram, make the phased mag series plots
# for each of the nbestperiods in each lspinfo dict
checkplot_pfmethods = []
for lspind, lspinfo in enumerate(lspinfolist):
# get the LSP from a pickle file transparently
if isinstance(lspinfo,str) and os.path.exists(lspinfo):
LOGINFO('loading LSP info from pickle %s' % lspinfo)
if '.gz' in lspinfo:
with gzip.open(lspinfo,'rb') as infd:
lspinfo = pickle.load(infd)
else:
with open(lspinfo,'rb') as infd:
lspinfo = pickle.load(infd)
# make the periodogram first
# we'll prepend the lspmethod index to allow for multiple same
# lspmethods
override_pfmethod = '%s-%s' % (lspind, lspinfo['method'])
periodogramdict = _pkl_periodogram(
lspinfo,
plotdpi=plotdpi,
override_pfmethod=override_pfmethod
)
# update the checkplotdict.
checkplotdict.update(periodogramdict)
# now, make the phased light curve plots for each of the
# nbestperiods from this periodogram
for nbpind, nbperiod in enumerate(
lspinfo['nbestperiods'][:nperiodstouse]
):
# if there's a function to use for fitting, do the fit
if lcfitfunc:
try:
if lcfitparams is None:
lcfitparams = {}
overplotfit = lcfitfunc(stimes,
smags,
serrs,
nbperiod,
**lcfitparams)
except Exception as e:
LOGEXCEPTION('the light curve fitting function '
'failed, not plotting a fit over the '
'phased light curve')
overplotfit = None
else:
overplotfit = None
# get the varepoch from a run of bls_snr if available. this
# allows us to use the correct transit center epochs if
# calculated using bls_snr and added back to the kbls function
# result dicts
if (lspinfo is not None and
'bls' in lspinfo['method'] and
'epochs' in lspinfo):
thisvarepoch = lspinfo['epochs'][nbpind]
if verbose:
LOGINFO(
'using pre-calculated transit-center epoch value: '
'%.6f from kbls.bls_snr() for period: %.5f'
% (thisvarepoch, nbperiod)
)
else:
thisvarepoch = varepoch
# this updates things as it runs
checkplotdict = _pkl_phased_magseries_plot(
checkplotdict,
lspinfo['method'],
nbpind,
stimes, smags, serrs,
nbperiod, thisvarepoch,
lspmethodind=lspind,
phasewrap=phasewrap,
phasesort=phasesort,
phasebin=phasebin,
minbinelems=minbinelems,
plotxlim=plotxlim,
overplotfit=overplotfit,
plotdpi=plotdpi,
bestperiodhighlight=bestperiodhighlight,
magsarefluxes=magsarefluxes,
xliminsetmode=xliminsetmode,
xgridlines=xgridlines,
verbose=verbose,
override_pfmethod=override_pfmethod,
)
# if there's an snr key for this lspmethod, add the info in it to
# the checkplotdict as well
if 'snr' in lspinfo:
if override_pfmethod in checkplotdict:
checkplotdict[override_pfmethod]['snr'] = (
lspinfo['snr']
)
if 'transitdepth' in lspinfo:
if override_pfmethod in checkplotdict:
checkplotdict[override_pfmethod]['transitdepth'] = (
lspinfo['transitdepth']
)
if 'transitduration' in lspinfo:
if override_pfmethod in checkplotdict:
checkplotdict[override_pfmethod]['transitduration'] = (
lspinfo['transitduration']
)
checkplot_pfmethods.append(override_pfmethod)
#
# end of processing each pfmethod
#
## update the checkplot dict with some other stuff that's needed by
## checkplotserver
# 3. add a comments key:val
checkplotdict['comments'] = None
# 4. calculate some variability features
if getvarfeatures is True:
checkplotdict['varinfo']['features'] = all_nonperiodic_features(
stimes,
smags,
serrs,
magsarefluxes=magsarefluxes,
)
# 5. add a signals key:val. this will be used by checkplotserver's
# pre-whitening and masking functions. these will write to
# checkplotdict['signals']['whiten'] and
# checkplotdict['signals']['mask'] respectively.
checkplotdict['signals'] = {}
# 6. add any externalplots if we have them
checkplotdict['externalplots'] = []
if (externalplots and
isinstance(externalplots, list) and
len(externalplots) > 0):
for externalrow in externalplots:
if all(os.path.exists(erowfile) for erowfile in externalrow):
if verbose:
LOGINFO('adding external plots: %s to checkplot dict' %
repr(externalrow))
checkplotdict['externalplots'].append(externalrow)
else:
LOGWARNING('could not add some external '
'plots in: %s to checkplot dict'
% repr(externalrow))
# 7. do any xmatches required
if xmatchinfo is not None:
checkplotdict = xmatch_external_catalogs(
checkplotdict,
xmatchinfo,
xmatchradiusarcsec=xmatchradiusarcsec
)
# the checkplotdict now contains everything we need
contents = sorted(list(checkplotdict.keys()))
checkplotdict['status'] = 'ok: contents are %s' % contents
if verbose:
LOGINFO('checkplot dict complete for %s' %
checkplotdict['objectid'])
LOGINFO('checkplot dict contents: %s' % contents)
# 8. update the pfmethods key
checkplotdict['pfmethods'] = checkplot_pfmethods
# otherwise, we don't have enough LC points, return nothing
else:
LOGERROR('not enough light curve points for %s, have %s, need %s' %
(checkplotdict['objectid'],len(stimes),mindet))
checkplotdict['magseries'] = None
checkplotdict['status'] = 'failed: not enough LC points'
# at the end, return the dict
return checkplotdict
|
def checkplot_pickle(
lspinfolist,
times,
mags,
errs,
fast_mode=False,
magsarefluxes=False,
nperiodstouse=3,
objectinfo=None,
deredden_object=True,
custom_bandpasses=None,
gaia_submit_timeout=10.0,
gaia_submit_tries=3,
gaia_max_timeout=180.0,
gaia_mirror=None,
complete_query_later=True,
varinfo=None,
getvarfeatures=True,
lclistpkl=None,
nbrradiusarcsec=60.0,
maxnumneighbors=5,
xmatchinfo=None,
xmatchradiusarcsec=3.0,
lcfitfunc=None,
lcfitparams=None,
externalplots=None,
findercmap='gray_r',
finderconvolve=None,
findercachedir='~/.astrobase/stamp-cache',
normto='globalmedian',
normmingap=4.0,
sigclip=4.0,
varepoch='min',
phasewrap=True,
phasesort=True,
phasebin=0.002,
minbinelems=7,
plotxlim=(-0.8,0.8),
xliminsetmode=False,
plotdpi=100,
bestperiodhighlight=None,
xgridlines=None,
mindet=99,
verbose=True,
outfile=None,
outgzip=False,
pickleprotocol=None,
returndict=False
):
'''This writes a multiple lspinfo checkplot to a (gzipped) pickle file.
This function can take input from multiple lspinfo dicts (e.g. a list of
output dicts or gzipped pickles of dicts from independent runs of BLS, PDM,
AoV, or GLS period-finders in periodbase).
NOTE: if `lspinfolist` contains more than one lspinfo object with the same
lspmethod ('pdm','gls','sls','aov','bls'), the latest one in the list will
overwrite the earlier ones.
The output dict contains all the plots (magseries and phased magseries),
periodograms, object information, variability information, light curves, and
phased light curves. This can be written to:
- a pickle with `checkplot_pickle` below
- a PNG with `checkplot.pkl_png.checkplot_pickle_to_png`
Parameters
----------
lspinfolist : list of dicts
This is a list of dicts containing period-finder results ('lspinfo'
dicts). These can be from any of the period-finder methods in
astrobase.periodbase. To incorporate external period-finder results into
checkplots, these dicts must be of the form below, including at least
the keys indicated here::
{'periods': np.array of all periods searched by the period-finder,
'lspvals': np.array of periodogram power value for each period,
'bestperiod': a float value that is the period with the highest
peak in the periodogram, i.e. the most-likely actual
period,
'method': a three-letter code naming the period-finder used; must
be one of the keys in the
`astrobase.periodbase.METHODLABELS` dict,
'nbestperiods': a list of the periods corresponding to periodogram
peaks (`nbestlspvals` below) to annotate on the
periodogram plot so they can be called out
visually,
'nbestlspvals': a list of the power values associated with
periodogram peaks to annotate on the periodogram
plot so they can be called out visually; should be
the same length as `nbestperiods` above}
`nbestperiods` and `nbestlspvals` in each lspinfo dict must have at
least as many elements as the `nperiodstouse` kwarg to this function.
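As a purely illustrative sketch (not taken from astrobase itself; all
values below are made up), a hand-built lspinfo dict for an external
period-finder might look like::

    import numpy as np

    periods = np.linspace(0.5, 10.0, 10000)
    lspvals = np.random.random(periods.size)   # stand-in periodogram powers
    peakinds = np.argsort(lspvals)[::-1][:3]

    external_lspinfo = {
        'periods': periods,
        'lspvals': lspvals,
        'bestperiod': periods[peakinds[0]],
        'method': 'gls',   # must be a key in periodbase.METHODLABELS
        'nbestperiods': periods[peakinds].tolist(),
        'nbestlspvals': lspvals[peakinds].tolist(),
    }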
times,mags,errs : np.arrays
The magnitude/flux time-series to process for this checkplot along with
their associated measurement errors.
fast_mode : bool or float
This runs the external catalog operations in a "fast" mode, with short
timeouts and not trying to hit external catalogs that take a long time
to respond.
If this is set to True, the default settings for the external requests
will then become::
skyview_lookup = False
skyview_timeout = 10.0
skyview_retry_failed = False
dust_timeout = 10.0
gaia_submit_timeout = 7.0
gaia_max_timeout = 10.0
gaia_submit_tries = 2
complete_query_later = False
search_simbad = False
If this is a float, will run in "fast" mode with the provided timeout
value in seconds and the following settings::
skyview_lookup = True
skyview_timeout = fast_mode
skyview_retry_failed = False
dust_timeout = fast_mode
gaia_submit_timeout = 0.66*fast_mode
gaia_max_timeout = fast_mode
gaia_submit_tries = 2
complete_query_later = False
search_simbad = False
magsarefluxes : bool
If True, indicates the input time-series is fluxes and not mags so the
plot y-axis direction and range can be set appropriately.
nperiodstouse : int
This controls how many 'best' periods to make phased LC plots for. By
default, this is the 3 best. If this is set to None, all 'best' periods
present in each lspinfo dict's 'nbestperiods' key will be processed for
this checkplot.
objectinfo : dict or None
If provided, this is a dict containing information on the object whose
light curve is being processed. This function will then be able to look
up and download a finder chart for this object and write that to the
output checkplotdict. External services such as GAIA, SIMBAD, TIC@MAST,
etc. will also be used to look up this object by its coordinates, and
will add in information available from those services.
The `objectinfo` dict must be of the form and contain at least the keys
described below::
{'objectid': the name of the object,
'ra': the right ascension of the object in decimal degrees,
'decl': the declination of the object in decimal degrees,
'ndet': the number of observations of this object}
You can also provide magnitudes and proper motions of the object using
the following keys and the appropriate values in the `objectinfo`
dict. These will be used to calculate colors, total and reduced proper
motion, etc. and display these in the output checkplot PNG::
'pmra' -> the proper motion in mas/yr in right ascension,
'pmdecl' -> the proper motion in mas/yr in the declination,
'umag' -> U mag -> colors: U-B, U-V, U-g
'bmag' -> B mag -> colors: U-B, B-V
'vmag' -> V mag -> colors: U-V, B-V, V-R, V-I, V-K
'rmag' -> R mag -> colors: V-R, R-I
'imag' -> I mag -> colors: g-I, V-I, R-I, B-I
'jmag' -> 2MASS J mag -> colors: J-H, J-K, g-J, i-J
'hmag' -> 2MASS H mag -> colors: J-H, H-K
'kmag' -> 2MASS Ks mag -> colors: g-Ks, H-Ks, J-Ks, V-Ks
'sdssu' -> SDSS u mag -> colors: u-g, u-V
'sdssg' -> SDSS g mag -> colors: g-r, g-i, g-K, u-g, U-g, g-J
'sdssr' -> SDSS r mag -> colors: r-i, g-r
'sdssi' -> SDSS i mag -> colors: r-i, i-z, g-i, i-J, i-W1
'sdssz' -> SDSS z mag -> colors: i-z, z-W2, g-z
'ujmag' -> UKIRT J mag -> colors: J-H, H-K, J-K, g-J, i-J
'uhmag' -> UKIRT H mag -> colors: J-H, H-K
'ukmag' -> UKIRT K mag -> colors: g-K, H-K, J-K, V-K
'irac1' -> Spitzer IRAC1 mag -> colors: i-I1, I1-I2
'irac2' -> Spitzer IRAC2 mag -> colors: I1-I2, I2-I3
'irac3' -> Spitzer IRAC3 mag -> colors: I2-I3
'irac4' -> Spitzer IRAC4 mag -> colors: I3-I4
'wise1' -> WISE W1 mag -> colors: i-W1, W1-W2
'wise2' -> WISE W2 mag -> colors: W1-W2, W2-W3
'wise3' -> WISE W3 mag -> colors: W2-W3
'wise4' -> WISE W4 mag -> colors: W3-W4
If you have magnitude measurements in other bands, use the
`custom_bandpasses` kwarg to pass these in.
If this is None, no object information will be incorporated into the
checkplot (kind of making it effectively useless for anything other than
glancing at the phased light curves at various 'best' periods from the
period-finder results).
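For reference, a minimal `objectinfo` dict (the name, coordinates, and
magnitudes below are made up) might look like::

    objectinfo = {
        'objectid': 'OBJ-0001',   # hypothetical object name
        'ra': 123.456,            # decimal degrees
        'decl': -45.678,          # decimal degrees
        'ndet': 1042,             # number of observations
        'sdssr': 12.34,           # optional mags used for colors
        'pmra': 5.2,              # optional proper motion in mas/yr
        'pmdecl': -3.1,
    }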
deredden_object : bool
If this is True, will use the 2MASS DUST service to get extinction
coefficients in various bands, and then try to deredden the magnitudes
and colors of the object already present in the checkplot's objectinfo
dict.
custom_bandpasses : dict
This is a dict used to provide custom bandpass definitions for any
magnitude measurements in the objectinfo dict that are not automatically
recognized by :py:func:`astrobase.varclass.starfeatures.color_features`.
gaia_submit_timeout : float
Sets the timeout in seconds to use when submitting a request to look up
the object's information to the GAIA service. Note that if `fast_mode`
is set, this is ignored.
gaia_submit_tries : int
Sets the maximum number of times the GAIA services will be contacted to
obtain this object's information. If `fast_mode` is set, this is
ignored, and the services will be contacted only once (meaning that a
failure to respond will be silently ignored and no GAIA data will be
added to the checkplot's objectinfo dict).
gaia_max_timeout : float
Sets the timeout in seconds to use when waiting for the GAIA service to
respond to our request for the object's information. Note that if
`fast_mode` is set, this is ignored.
gaia_mirror : str or None
This sets the GAIA mirror to use. This is a key in the
`services.gaia.GAIA_URLS` dict which defines the URLs to hit for each
mirror.
complete_query_later : bool
If this is True, saves the state of GAIA queries that are not yet
complete when `gaia_max_timeout` is reached while waiting for the GAIA
service to respond to our request. A later call for GAIA info on the
same object will attempt to pick up the results from the existing query
if it's completed. If `fast_mode` is True, this is ignored.
varinfo : dict
If this is None, a blank dict of the form below will be added to the
checkplotdict::
{'objectisvar': None -> variability flag (None indicates unset),
'vartags': CSV str containing variability type tags from review,
'varisperiodic': None -> periodic variability flag (None -> unset),
'varperiod': the period associated with the periodic variability,
'varepoch': the epoch associated with the periodic variability}
If you provide a dict matching this format in this kwarg, this will be
passed unchanged to the output checkplotdict produced.
getvarfeatures : bool
If this is True, several light curve variability features for this
object will be calculated and added to the output checkplotdict as
checkplotdict['varinfo']['features']. This uses the function
`varclass.varfeatures.all_nonperiodic_features` so see its docstring for
the measures that are calculated (e.g. Stetson J indices, dispersion
measures, etc.)
lclistpkl : dict or str
If this is provided, must be a dict resulting from reading a catalog
produced by the `lcproc.catalogs.make_lclist` function or a str path
pointing to the pickle file produced by that function. This catalog is
used to find neighbors of the current object in the current light curve
collection. Looking at neighbors of the object within the radius
specified by `nbrradiusarcsec` is useful for light curves produced by
instruments that have a large pixel scale, so are susceptible to
blending of variability and potential confusion of neighbor variability
with that of the actual object being looked at. If this is None, no
neighbor lookups will be performed.
nbrradiusarcsec : float
The radius in arcseconds of the search conducted around the coordinates
of this object to look for any potential confusion and blending of
variability amplitude caused by nearby objects.
maxnumneighbors : int
The maximum number of neighbors that will have their light curves and
magnitudes noted in this checkplot as potential blends with the target
object.
xmatchinfo : str or dict
This is either the xmatch dict produced by the function
`load_xmatch_external_catalogs` above, or the path to the xmatch info
pickle file produced by that function.
xmatchradiusarcsec : float
This is the cross-matching radius to use in arcseconds.
lcfitfunc : Python function or None
If provided, this should be a Python function that is used to fit a
model to the light curve. This fit is then overplotted for each phased
light curve in the checkplot. This function should have the following
signature:
`def lcfitfunc(times, mags, errs, period, **lcfitparams)`
where `lcfitparams` encapsulates all external parameters (e.g. the
number of knots for a spline fit, the degree of a Legendre polynomial
fit, planet transit parameters, etc.). This function should return a Python
dict with the following structure (similar to the functions in
`astrobase.lcfit`) and at least the keys below::
{'fittype':<str: name of fit method>,
'fitchisq':<float: the chi-squared value of the fit>,
'fitredchisq':<float: the reduced chi-squared value of the fit>,
'fitinfo':{'fitmags':<ndarray: model mags/fluxes from fit func>},
'magseries':{'times':<ndarray: times where fitmags are evaluated>}}
Additional keys in the dict returned from this function can include
`fitdict['fitinfo']['finalparams']` for the final model fit parameters
(this will be used by the checkplotserver if present),
`fitdict['fitinfo']['fitepoch']` for the minimum light epoch returned by
the model fit, among others.
In any case, the output dict of `lcfitfunc` will be copied to the output
checkplotdict as
`checkplotdict[lspmethod][periodind]['lcfit'][<fittype>]` for each
phased light curve.
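As a minimal sketch (this function is not part of astrobase; it only
illustrates the required call signature and return structure), a
constant-model fit could be written as::

    import numpy as np

    def constant_lcfitfunc(times, mags, errs, period, **lcfitparams):
        # 'fit' the simplest possible model: the median magnitude
        fitmags = np.full(mags.size, np.median(mags))
        chisq = np.sum(((mags - fitmags)/errs)**2.0)
        return {'fittype': 'constant',
                'fitchisq': chisq,
                'fitredchisq': chisq/(mags.size - 1),
                'fitinfo': {'fitmags': fitmags},
                'magseries': {'times': times}}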
lcfitparams : dict
A dict containing the LC fit parameters to use when calling the function
provided in `lcfitfunc`. This contains key-val pairs corresponding to
parameter names and their respective initial values to be used by the
fit function.
externalplots : list of tuples of str
If provided, this is a list of 4-element tuples containing:
1. path to PNG of periodogram from an external period-finding method
2. path to PNG of best period phased LC from the external period-finder
3. path to PNG of 2nd-best phased LC from the external period-finder
4. path to PNG of 3rd-best phased LC from the external period-finder
This can be used to incorporate external period-finding method results
into the output checkplot pickle or exported PNG to allow for comparison
with astrobase results. Example of `externalplots`::
[('/path/to/external/bls-periodogram.png',
'/path/to/external/bls-phasedlc-plot-bestpeak.png',
'/path/to/external/bls-phasedlc-plot-peak2.png',
'/path/to/external/bls-phasedlc-plot-peak3.png'),
('/path/to/external/pdm-periodogram.png',
'/path/to/external/pdm-phasedlc-plot-bestpeak.png',
'/path/to/external/pdm-phasedlc-plot-peak2.png',
'/path/to/external/pdm-phasedlc-plot-peak3.png'),
...]
If `externalplots` is provided here, these paths will be stored in the
output checkplotdict. The `checkplot.pkl_png.checkplot_pickle_to_png`
function can then automatically retrieve these plot PNGs and put
them into the exported checkplot PNG.
findercmap : str or matplotlib.cm.ColorMap object
The Colormap object to use for the finder chart image.
finderconvolve : astropy.convolution.Kernel object or None
If not None, the Kernel object to use for convolving the finder image.
findercachedir : str
The path to the astrobase cache directory for finder chart downloads
from the NASA SkyView service.
normto : {'globalmedian', 'zero'} or a float
This specifies the normalization target::
'globalmedian' -> norms each mag to global median of the LC column
'zero' -> norms each mag to zero
a float -> norms each mag to this specified float value.
normmingap : float
This defines the minimum gap (in days) required between consecutive
measurements for them to be considered as belonging to different time
groups. By default, this is set to 4.0 days.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
varepoch : 'min' or float or list of lists or None
The epoch to use for this phased light curve plot tile. If this is a
float, will use the provided value directly. If this is 'min', will
automatically figure out the time-of-minimum of the phased light
curve. If this is None, will use the minimum value of `stimes` as the
epoch of the phased light curve plot. If this is a list of lists, will
use the provided value of `lspmethodind` to look up the current
period-finder method and the provided value of `periodind` to look up
the epoch associated with that method and the current period. This is
mostly only useful when `twolspmode` is True.
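For example (the epochs below are made up), with two period-finders in
`lspinfolist` and three 'best' periods each, the list-of-lists form
would be::

    varepoch = [
        [2455123.456, 2455124.001, 2455124.789],  # epochs for lspinfolist[0]
        [2455200.123, 2455201.456, 2455202.789],  # epochs for lspinfolist[1]
    ]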
phasewrap : bool
If this is True, the phased time-series will be wrapped around
phase 0.0.
phasesort : bool
If True, will sort the phased light curve in order of increasing phase.
phasebin : float
The bin size to use to group together measurements closer than this
amount in phase. This is in units of phase. If this is a float, a
phase-binned version of the phased light curve will be overplotted on
top of the regular phased light curve.
minbinelems : int
The minimum number of elements required per phase bin to include it in
the phased LC plot.
plotxlim : sequence of two floats or None
The x-range (min, max) of the phased light curve plot. If None, will be
determined automatically.
xliminsetmode : bool
If this is True, the generated phased light curve plot will use the
values of `plotxlim` as the main plot x-axis limits (i.e. zoomed-in if
`plotxlim` is a range smaller than the full phase range), and will show
the full phased light curve plot as a smaller inset. Useful for
planetary transit light curves.
plotdpi : int
The resolution of the output plot PNGs in dots per inch.
bestperiodhighlight : str or None
If not None, this is a str with a matplotlib color specification to use
as the background color to highlight the phased light curve plot of the
'best' period and epoch combination. If None, no highlight will be
applied.
xgridlines : list of floats or None
If this is provided, it must be a list of floats corresponding to the
phase values at which to draw vertical dashed lines, as a way of
highlighting these phases.
mindet : int
The minimum number of observations the input object's mag/flux
time-series must have for this function to plot its light curve and
phased light curve. If the object has fewer than this number, no light
curves will be
plotted, but the checkplotdict will still contain all of the other
information.
verbose : bool
If True, will indicate progress and warn about problems.
outfile : str or None
The name of the output checkplot pickle file. If this is None, will
write the checkplot pickle to a file called 'checkplot.pkl' in the current
working directory.
outgzip : bool
This controls whether to gzip the output pickle. It turns out that this
is the slowest bit in the output process, so if you're after speed, best
not to use this. This is False by default since it turns out that gzip
actually doesn't save that much space (29 MB vs. 35 MB for the average
checkplot pickle).
pickleprotocol : int or None
This sets the pickle file protocol to use when writing the pickle:
If None, will choose a protocol using the following rules:
- 4 -> default in Python >= 3.4 - fast but incompatible with Python 2
- 3 -> default in Python 3.0-3.3 - mildly fast
- 2 -> default in Python 2 - very slow, but compatible with Python 2/3
The default protocol kwarg is None, this will make an automatic choice
for pickle protocol that's best suited for the version of Python in
use. Note that this will make pickles generated by Py3 incompatible with
Py2.
returndict : bool
If this is True, will return the checkplotdict instead of returning the
filename of the output checkplot pickle.
Returns
-------
dict or str
If returndict is False, will return the path to the generated checkplot
pickle file. If returndict is True, will return the checkplotdict
instead.
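For example, a typical call might look like the sketch below (the
`times`, `mags`, `errs` arrays and the `objectinfo` dict are assumed to
have been prepared already; the output filename is illustrative)::

    from astrobase import periodbase

    gls = periodbase.pgen_lsp(times, mags, errs)
    bls = periodbase.bls_parallel_pfind(times, mags, errs,
                                        startp=0.5, endp=10.0)
    cpf = checkplot_pickle([gls, bls], times, mags, errs,
                           objectinfo=objectinfo,
                           outfile='checkplot-myobject.pkl')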
'''
# call checkplot_dict for most of the work
checkplotdict = checkplot_dict(
lspinfolist,
times,
mags,
errs,
magsarefluxes=magsarefluxes,
nperiodstouse=nperiodstouse,
objectinfo=objectinfo,
deredden_object=deredden_object,
custom_bandpasses=custom_bandpasses,
gaia_submit_timeout=gaia_submit_timeout,
gaia_submit_tries=gaia_submit_tries,
gaia_max_timeout=gaia_max_timeout,
gaia_mirror=gaia_mirror,
complete_query_later=complete_query_later,
varinfo=varinfo,
getvarfeatures=getvarfeatures,
lclistpkl=lclistpkl,
nbrradiusarcsec=nbrradiusarcsec,
maxnumneighbors=maxnumneighbors,
xmatchinfo=xmatchinfo,
xmatchradiusarcsec=xmatchradiusarcsec,
lcfitfunc=lcfitfunc,
lcfitparams=lcfitparams,
externalplots=externalplots,
findercmap=findercmap,
finderconvolve=finderconvolve,
findercachedir=findercachedir,
normto=normto,
normmingap=normmingap,
sigclip=sigclip,
varepoch=varepoch,
phasewrap=phasewrap,
phasesort=phasesort,
phasebin=phasebin,
minbinelems=minbinelems,
plotxlim=plotxlim,
xliminsetmode=xliminsetmode,
plotdpi=plotdpi,
bestperiodhighlight=bestperiodhighlight,
xgridlines=xgridlines,
mindet=mindet,
verbose=verbose,
fast_mode=fast_mode
)
# for Python >= 3.4, use v4
if ((sys.version_info[0:2] >= (3,4) and not pickleprotocol) or
(pickleprotocol > 2)):
pickleprotocol = 4
elif ((sys.version_info[0:2] >= (3,0) and not pickleprotocol) or
(pickleprotocol > 2)):
pickleprotocol = 3
# for Python == 2.7; use v2
elif sys.version_info[0:2] == (2,7) and not pickleprotocol:
pickleprotocol = 2
# otherwise, if left unspecified, use the slowest but most compatible
# protocol. this will be readable by all (most?) Pythons
elif not pickleprotocol:
pickleprotocol = 0
# generate the output file path
if outgzip:
# generate the outfile filename
if (not outfile and
len(lspinfolist) > 0 and
isinstance(lspinfolist[0], str)):
plotfpath = os.path.join(os.path.dirname(lspinfolist[0]),
'checkplot-%s.pkl.gz' %
checkplotdict['objectid'])
elif outfile:
plotfpath = outfile
else:
plotfpath = 'checkplot.pkl.gz'
else:
# generate the outfile filename
if (not outfile and
len(lspinfolist) > 0 and
isinstance(lspinfolist[0], str)):
plotfpath = os.path.join(os.path.dirname(lspinfolist[0]),
'checkplot-%s.pkl' %
checkplotdict['objectid'])
elif outfile:
plotfpath = outfile
else:
plotfpath = 'checkplot.pkl'
# write the completed checkplotdict to a pickle (gzipped if requested)
picklefname = _write_checkplot_picklefile(checkplotdict,
outfile=plotfpath,
protocol=pickleprotocol,
outgzip=outgzip)
# at the end, return the dict and filename if asked for
if returndict:
if verbose:
LOGINFO('checkplot done -> %s' % picklefname)
return checkplotdict, picklefname
# otherwise, just return the filename
else:
# just to make sure: free up space
del checkplotdict
if verbose:
LOGINFO('checkplot done -> %s' % picklefname)
return picklefname
|
def checkplot_pickle_update(
currentcp,
updatedcp,
outfile=None,
outgzip=False,
pickleprotocol=None,
verbose=True
):
'''This updates the current checkplotdict with updated values provided.
Parameters
----------
currentcp : dict or str
This is either a checkplotdict produced by `checkplot_pickle` above or a
checkplot pickle file produced by the same function. This checkplot will
be updated from the `updatedcp` checkplot.
updatedcp : dict or str
This is either a checkplotdict produced by `checkplot_pickle` above or a
checkplot pickle file produced by the same function. This checkplot will
be the source of the update to the `currentcp` checkplot.
outfile : str or None
The name of the output checkplot pickle file. The function will write
the updated checkplot pickle to `outfile` if this is a filename. If
`currentcp` is a filename and `outfile` is None, `outfile` will be set
to that same filename, so the function updates the checkplot pickle in
place.
outgzip : bool
This controls whether to gzip the output pickle. It turns out that this
is the slowest bit in the output process, so if you're after speed, best
not to use this. This is False by default since it turns out that gzip
actually doesn't save that much space (29 MB vs. 35 MB for the average
checkplot pickle).
pickleprotocol : int or None
This sets the pickle file protocol to use when writing the pickle:
If None, will choose a protocol using the following rules:
- 4 -> default in Python >= 3.4 - fast but incompatible with Python 2
- 3 -> default in Python 3.0-3.3 - mildly fast
- 2 -> default in Python 2 - very slow, but compatible with Python 2/3
The default protocol kwarg is None, this will make an automatic choice
for pickle protocol that's best suited for the version of Python in
use. Note that this will make pickles generated by Py3 incompatible with
Py2.
verbose : bool
If True, will indicate progress and warn about problems.
Returns
-------
str
The path to the updated checkplot pickle file. If `outfile` was None and
`currentcp` was a filename, this will return `currentcp` to indicate
that the checkplot pickle file was updated in place.
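For example (file names are illustrative; `reviewed_checkplotdict` is
assumed to be a dict holding the updated key:vals), to update an
existing checkplot pickle in place::

    updated_fpath = checkplot_pickle_update(
        'checkplot-OBJ-0001.pkl',     # existing pickle on disk
        reviewed_checkplotdict        # dict with the updated key:vals
    )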
'''
# break out python 2.7 and > 3 nonsense
if sys.version_info[:2] > (3,2):
# generate the outfile filename
if not outfile and isinstance(currentcp,str):
plotfpath = currentcp
elif outfile:
plotfpath = outfile
elif isinstance(currentcp, dict) and currentcp['objectid']:
if outgzip:
plotfpath = 'checkplot-%s.pkl.gz' % currentcp['objectid']
else:
plotfpath = 'checkplot-%s.pkl' % currentcp['objectid']
else:
# we'll get this later below
plotfpath = None
if (isinstance(currentcp, str) and os.path.exists(currentcp)):
cp_current = _read_checkplot_picklefile(currentcp)
elif isinstance(currentcp, dict):
cp_current = currentcp
else:
LOGERROR('currentcp: %s of type %s is not a '
'valid checkplot filename (or does not exist), or a dict' %
(os.path.abspath(currentcp), type(currentcp)))
return None
if (isinstance(updatedcp, str) and os.path.exists(updatedcp)):
cp_updated = _read_checkplot_picklefile(updatedcp)
elif isinstance(updatedcp, dict):
cp_updated = updatedcp
else:
LOGERROR('updatedcp: %s of type %s is not a '
'valid checkplot filename (or does not exist), or a dict' %
(os.path.abspath(updatedcp), type(updatedcp)))
return None
# check for unicode in python 2.7
else:
# generate the outfile filename
if (not outfile and
(isinstance(currentcp, str) or isinstance(currentcp, unicode))):
plotfpath = currentcp
elif outfile:
plotfpath = outfile
elif isinstance(currentcp, dict) and currentcp['objectid']:
if outgzip:
plotfpath = 'checkplot-%s.pkl.gz' % currentcp['objectid']
else:
plotfpath = 'checkplot-%s.pkl' % currentcp['objectid']
else:
# we'll get this later below
plotfpath = None
# get the current checkplotdict
if ((isinstance(currentcp, str) or isinstance(currentcp, unicode)) and
os.path.exists(currentcp)):
cp_current = _read_checkplot_picklefile(currentcp)
elif isinstance(currentcp,dict):
cp_current = currentcp
else:
LOGERROR('currentcp: %s of type %s is not a '
'valid checkplot filename (or does not exist), or a dict' %
(os.path.abspath(currentcp), type(currentcp)))
return None
# get the updated checkplotdict
if ((isinstance(updatedcp, str) or isinstance(updatedcp, unicode)) and
os.path.exists(updatedcp)):
cp_updated = _read_checkplot_picklefile(updatedcp)
elif isinstance(updatedcp, dict):
cp_updated = updatedcp
else:
LOGERROR('updatedcp: %s of type %s is not a '
'valid checkplot filename (or does not exist), or a dict' %
(os.path.abspath(updatedcp), type(updatedcp)))
return None
# do the update using python's dict update mechanism
# this requires updated to be in the same checkplotdict format as current
# all keys in current will now be from updated
cp_current.update(cp_updated)
# figure out the plotfpath if we haven't by now
if not plotfpath and outgzip:
plotfpath = 'checkplot-%s.pkl.gz' % cp_current['objectid']
elif (not plotfpath) and (not outgzip):
plotfpath = 'checkplot-%s.pkl' % cp_current['objectid']
# make sure we write the correct postfix
if plotfpath.endswith('.gz'):
outgzip = True
# write the new checkplotdict
return _write_checkplot_picklefile(cp_current,
outfile=plotfpath,
outgzip=outgzip,
protocol=pickleprotocol)
|
def get_frequency_grid(times,
samplesperpeak=5,
nyquistfactor=5,
minfreq=None,
maxfreq=None,
returnf0dfnf=False):
'''This calculates a frequency grid for the period finding functions in this
module.
Based on the autofrequency function in astropy.stats.lombscargle.
http://docs.astropy.org/en/stable/_modules/astropy/stats/lombscargle/core.html#LombScargle.autofrequency
'''
baseline = times.max() - times.min()
nsamples = times.size
df = 1. / baseline / samplesperpeak
if minfreq is not None:
f0 = minfreq
else:
f0 = 0.5 * df
if maxfreq is not None:
Nf = int(npceil((maxfreq - f0) / df))
else:
Nf = int(0.5 * samplesperpeak * nyquistfactor * nsamples)
if returnf0dfnf:
return f0, df, Nf, f0 + df * nparange(Nf)
else:
return f0 + df * nparange(Nf)
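# The function below is an illustrative usage sketch only (not part of the
# original module): it shows how get_frequency_grid might be called. The
# times array is synthetic and purely for demonstration.
def _example_frequency_grid_usage():
    '''Illustrative only: builds a frequency grid for a made-up times array.'''
    import numpy as np
    # ~500 observations spread over a ~100 day baseline (made-up values)
    times = np.sort(np.random.uniform(2455000.0, 2455100.0, size=500))
    # the full frequency grid with the default oversampling settings
    freqs = get_frequency_grid(times, samplesperpeak=5, nyquistfactor=5)
    # alternatively, get f0, df, Nf as well (useful for chunking the grid
    # across parallel workers)
    f0, df, nf, freqs = get_frequency_grid(times, returnf0dfnf=True)
    return f0, df, nf, freqs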
|
def pwd_phasebin(phases, mags, binsize=0.002, minbin=9):
'''
This bins the phased mag series using the given binsize.
'''
bins = np.arange(0.0, 1.0, binsize)
binnedphaseinds = npdigitize(phases, bins)
binnedphases, binnedmags = [], []
for x in npunique(binnedphaseinds):
thisbin_inds = binnedphaseinds == x
thisbin_phases = phases[thisbin_inds]
thisbin_mags = mags[thisbin_inds]
# count the points actually in this bin, not the size of the boolean
# index array (which is always the full length of the input)
if thisbin_phases.size > minbin:
binnedphases.append(npmedian(thisbin_phases))
binnedmags.append(npmedian(thisbin_mags))
return np.array(binnedphases), np.array(binnedmags)
|
def pdw_worker(task):
'''
This is the parallel worker for the function below.
task[0] = frequency for this worker
task[1] = times array
task[2] = mags array
task[3] = fold_time
task[4] = j_range
task[5] = keep_threshold_1
task[6] = keep_threshold_2
task[7] = phasebinsize
we don't need errs for the worker.
'''
frequency = task[0]
times, modmags = task[1], task[2]
fold_time = task[3]
j_range = range(task[4])
keep_threshold_1 = task[5]
keep_threshold_2 = task[6]
phasebinsize = task[7]
try:
period = 1.0/frequency
# use the common phaser to phase and sort the mag
phased = phase_magseries(times,
modmags,
period,
fold_time,
wrap=False,
sort=True)
# bin in phase if requested, this turns this into a sort of PDM method
if phasebinsize is not None and phasebinsize > 0:
bphased = pwd_phasebin(phased['phase'],
phased['mags'],
binsize=phasebinsize)
phase_sorted = bphased[0]
mod_mag_sorted = bphased[1]
j_range = range(len(mod_mag_sorted) - 1)
else:
phase_sorted = phased['phase']
mod_mag_sorted = phased['mags']
# now calculate the string length
rolledmags = nproll(mod_mag_sorted,1)
rolledphases = nproll(phase_sorted,1)
strings = (
(rolledmags - mod_mag_sorted)*(rolledmags - mod_mag_sorted) +
(rolledphases - phase_sorted)*(rolledphases - phase_sorted)
)
strings[0] = (
((mod_mag_sorted[0] - mod_mag_sorted[-1]) *
(mod_mag_sorted[0] - mod_mag_sorted[-1])) +
((phase_sorted[0] - phase_sorted[-1] + 1) *
(phase_sorted[0] - phase_sorted[-1] + 1))
)
strlen = npsum(npsqrt(strings))
if (keep_threshold_1 < strlen < keep_threshold_2):
p_goodflag = True
else:
p_goodflag = False
return (period, strlen, p_goodflag)
except Exception as e:
LOGEXCEPTION('error in DWP')
return(period, npnan, False)
|
def pdw_period_find(times,
mags,
errs,
autofreq=True,
init_p=None,
end_p=None,
f_step=1.0e-4,
phasebinsize=None,
sigclip=10.0,
nworkers=None,
verbose=False):
'''This is the parallel version of the function above.
Uses the string length method in Dworetsky 1983 to calculate the period of a
time-series of magnitude measurements and associated magnitude errors. This
can optionally bin in phase to try to speed up the calculation.
PARAMETERS:
times: series of times at which mags were measured (usually some form of JD)
mags: timeseries of magnitudes (np.array)
errs: associated errs per magnitude measurement (np.array)
init_p, end_p: interval to search for periods between (both ends inclusive)
f_step: step in frequency [days^-1] to use
RETURNS:
a dict with the keys: 'bestperiod', 'beststrlen', 'bestflag',
'nbestperiods', 'nbeststrlens', 'nbestflags', and the full 'periods',
'strlens', and 'goodflags' arrays
'''
# remove nans
find = npisfinite(times) & npisfinite(mags) & npisfinite(errs)
ftimes, fmags, ferrs = times[find], mags[find], errs[find]
mod_mags = (fmags - npmin(fmags))/(2.0*(npmax(fmags) - npmin(fmags))) - 0.25
if len(ftimes) > 9 and len(fmags) > 9 and len(ferrs) > 9:
# get the median and stdev = 1.483 x MAD
median_mag = np.median(fmags)
stddev_mag = (np.median(np.abs(fmags - median_mag))) * 1.483
# sigclip next
if sigclip:
sigind = (np.abs(fmags - median_mag)) < (sigclip * stddev_mag)
stimes = ftimes[sigind]
smags = fmags[sigind]
serrs = ferrs[sigind]
LOGINFO('sigclip = %s: before = %s observations, '
'after = %s observations' %
(sigclip, len(times), len(stimes)))
else:
stimes = ftimes
smags = fmags
serrs = ferrs
# make sure there are enough points to calculate a spectrum
if len(stimes) > 9 and len(smags) > 9 and len(serrs) > 9:
# get the frequencies to use
if init_p:
endf = 1.0/init_p
else:
# default start period is 0.1 day
endf = 1.0/0.1
if end_p:
startf = 1.0/end_p
else:
# default end period is length of time series
startf = 1.0/(stimes.max() - stimes.min())
# if we're not using autofreq, then use the provided frequencies
if not autofreq:
frequencies = np.arange(startf, endf, f_step)
LOGINFO(
'using %s frequency points, start P = %.3f, end P = %.3f' %
(frequencies.size, 1.0/endf, 1.0/startf)
)
else:
# this gets an automatic grid of frequencies to use
frequencies = get_frequency_grid(stimes,
minfreq=startf,
maxfreq=endf)
LOGINFO(
'using autofreq with %s frequency points, '
'start P = %.3f, end P = %.3f' %
(frequencies.size,
1.0/frequencies.max(),
1.0/frequencies.min())
)
# set up some internal stuff
fold_time = npmin(ftimes) # fold at the first time element
j_range = len(fmags)-1
epsilon = 2.0 * npmean(ferrs)
delta_l = 0.34 * (epsilon - 0.5*(epsilon**2)) * (len(ftimes) -
npsqrt(10.0/epsilon))
keep_threshold_1 = 1.6 + 1.2*delta_l
l = 0.212*len(ftimes)
sig_l = len(ftimes)/37.5
keep_threshold_2 = l + 4.0*sig_l
# generate the tasks
tasks = [(x,
ftimes,
mod_mags,
fold_time,
j_range,
keep_threshold_1,
keep_threshold_2,
phasebinsize) for x in frequencies]
# fire up the pool and farm out the tasks
if (not nworkers) or (nworkers > NCPUS):
nworkers = NCPUS
LOGINFO('using %s workers...' % nworkers)
pool = Pool(nworkers)
strlen_results = pool.map(pdw_worker, tasks)
pool.close()
pool.join()
del pool
periods, strlens, goodflags = zip(*strlen_results)
periods, strlens, goodflags = (np.array(periods),
np.array(strlens),
np.array(goodflags))
strlensort = npargsort(strlens)
nbeststrlens = strlens[strlensort[:5]]
nbestperiods = periods[strlensort[:5]]
nbestflags = goodflags[strlensort[:5]]
bestperiod = nbestperiods[0]
beststrlen = nbeststrlens[0]
bestflag = nbestflags[0]
return {'bestperiod':bestperiod,
'beststrlen':beststrlen,
'bestflag':bestflag,
'nbeststrlens':nbeststrlens,
'nbestperiods':nbestperiods,
'nbestflags':nbestflags,
'strlens':strlens,
'periods':periods,
'goodflags':goodflags}
else:
LOGERROR('no good detections for these times and mags, skipping...')
return {'bestperiod':npnan,
'beststrlen':npnan,
'bestflag':npnan,
'nbeststrlens':None,
'nbestperiods':None,
'nbestflags':None,
'strlens':None,
'periods':None,
'goodflags':None}
else:
LOGERROR('no good detections for these times and mags, skipping...')
return {'bestperiod':npnan,
'beststrlen':npnan,
'bestflag':npnan,
'nbeststrlens':None,
'nbestperiods':None,
'nbestflags':None,
'strlens':None,
'periods':None,
'goodflags':None}
|
def townsend_lombscargle_value(times, mags, omega):
'''
This calculates the periodogram value at a single omega (= 2*pi*f). Mags must
be normalized to zero with variance scaled to unity.
'''
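# what follows evaluates the standard Lomb-Scargle periodogram at a single
# angular frequency omega (cf. Townsend 2010). with tau defined by
#   tan(2*omega*tau) = sum(sin(2*omega*t)) / sum(cos(2*omega*t)),
# the periodogram value is
#   P(omega) = 0.5*[ (sum(m*cos(omega*(t-tau))))^2 / sum(cos^2(omega*(t-tau))) +
#                    (sum(m*sin(omega*(t-tau))))^2 / sum(sin^2(omega*(t-tau))) ]
# the left/right sums below expand the (t - tau) terms via the angle-addition
# formulas using the xc, xs, cc, ss, cs quantities computed first.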
cos_omegat = npcos(omega*times)
sin_omegat = npsin(omega*times)
xc = npsum(mags*cos_omegat)
xs = npsum(mags*sin_omegat)
cc = npsum(cos_omegat*cos_omegat)
ss = npsum(sin_omegat*sin_omegat)
cs = npsum(cos_omegat*sin_omegat)
tau = nparctan(2*cs/(cc - ss))/(2*omega)
ctau = npcos(omega*tau)
stau = npsin(omega*tau)
leftsumtop = (ctau*xc + stau*xs)*(ctau*xc + stau*xs)
leftsumbot = ctau*ctau*cc + 2.0*ctau*stau*cs + stau*stau*ss
leftsum = leftsumtop/leftsumbot
rightsumtop = (ctau*xs - stau*xc)*(ctau*xs - stau*xc)
rightsumbot = ctau*ctau*ss - 2.0*ctau*stau*cs + stau*stau*cc
rightsum = rightsumtop/rightsumbot
pval = 0.5*(leftsum + rightsum)
return pval
|
def parallel_townsend_lsp(times, mags, startp, endp,
stepsize=1.0e-4,
nworkers=4):
'''
This calculates the Lomb-Scargle periodogram for the frequencies
corresponding to the period interval (startp, endp) using a frequency step
size of stepsize cycles/day. This uses the algorithm in Townsend 2010.
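A short usage sketch (the times and mags arrays are assumed to have been
prepared elsewhere)::

    omegas, lspvals = parallel_townsend_lsp(times, mags,
                                            startp=0.5, endp=100.0,
                                            stepsize=1.0e-4, nworkers=4)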
'''
# make sure there are no nans anywhere
finiteind = np.isfinite(times) & np.isfinite(mags)
ftimes, fmags = times[finiteind], mags[finiteind]
# renormalize the mags to zero and scale them so that the variance = 1
nmags = (fmags - np.median(fmags))/np.std(fmags)
startf = 1.0/endp
endf = 1.0/startp
omegas = 2*np.pi*np.arange(startf, endf, stepsize)
# parallel map the lsp calculations
if (not nworkers) or (nworkers > NCPUS):
nworkers = NCPUS
LOGINFO('using %s workers...' % nworkers)
pool = Pool(nworkers)
tasks = [(ftimes, nmags, x) for x in omegas]
lsp = pool.map(townsend_lombscargle_wrapper, tasks)
pool.close()
pool.join()
return np.array(omegas), np.array(lsp)
|
def scipylsp_parallel(times,
mags,
errs, # ignored but for consistent API
startp,
endp,
nbestpeaks=5,
periodepsilon=0.1, # 0.1
stepsize=1.0e-4,
nworkers=4,
sigclip=None,
timebin=None):
'''
This uses the LSP function from the scipy library, which is fast as hell. We
try to make it faster by running LSP for sections of the omegas array in
parallel.
'''
# make sure there are no nans anywhere
finiteind = np.isfinite(times) & np.isfinite(mags) & np.isfinite(errs)
ftimes, fmags, ferrs = times[finiteind], mags[finiteind], errs[finiteind]
if len(ftimes) > 0 and len(fmags) > 0:
# sigclip the lightcurve if asked to do so
if sigclip:
worktimes, workmags, _ = sigclip_magseries(ftimes,
fmags,
ferrs,
sigclip=sigclip)
LOGINFO('ndet after sigclipping = %s' % len(worktimes))
else:
worktimes = ftimes
workmags = fmags
# bin the lightcurve if asked to do so
if timebin:
binned = time_bin_magseries(worktimes, workmags, binsize=timebin)
worktimes = binned['binnedtimes']
workmags = binned['binnedmags']
# renormalize the working mags to zero and scale them so that the
# variance = 1 for use with our LSP functions
normmags = (workmags - np.median(workmags))/np.std(workmags)
startf = 1.0/endp
endf = 1.0/startp
omegas = 2*np.pi*np.arange(startf, endf, stepsize)
# partition the omegas array by nworkers
tasks = []
chunksize = int(float(len(omegas))/nworkers) + 1
tasks = [omegas[x*chunksize:x*chunksize+chunksize]
for x in range(nworkers)]
# map to parallel workers
if (not nworkers) or (nworkers > NCPUS):
nworkers = NCPUS
LOGINFO('using %s workers...' % nworkers)
pool = Pool(nworkers)
tasks = [(worktimes, normmags, x) for x in tasks]
lsp = pool.map(parallel_scipylsp_worker, tasks)
pool.close()
pool.join()
lsp = np.concatenate(lsp)
periods = 2.0*np.pi/omegas
# find the nbestpeaks for the periodogram: 1. sort the lsp array by
# highest value first 2. go down the values until we find five values
# that are separated by at least periodepsilon in period
# make sure we only get finite lsp values
finitepeakind = npisfinite(lsp)
finlsp = lsp[finitepeakind]
finperiods = periods[finitepeakind]
bestperiodind = npargmax(finlsp)
sortedlspind = np.argsort(finlsp)[::-1]
sortedlspperiods = finperiods[sortedlspind]
sortedlspvals = finlsp[sortedlspind]
prevbestlspval = sortedlspvals[0]
# now get the nbestpeaks
nbestperiods, nbestlspvals, peakcount = (
[finperiods[bestperiodind]],
[finlsp[bestperiodind]],
1
)
prevperiod = sortedlspperiods[0]
# find the best nbestpeaks in the lsp and their periods
for period, lspval in zip(sortedlspperiods, sortedlspvals):
if peakcount == nbestpeaks:
break
perioddiff = abs(period - prevperiod)
bestperiodsdiff = [abs(period - x) for x in nbestperiods]
# print('prevperiod = %s, thisperiod = %s, '
# 'perioddiff = %s, peakcount = %s' %
# (prevperiod, period, perioddiff, peakcount))
# this ensures that this period is different from the last period
# and from all the other existing best periods by periodepsilon to
# make sure we jump to an entire different peak in the periodogram
if (perioddiff > periodepsilon and
all(x > periodepsilon for x in bestperiodsdiff)):
nbestperiods.append(period)
nbestlspvals.append(lspval)
peakcount = peakcount + 1
prevperiod = period
return {'bestperiod':finperiods[bestperiodind],
'bestlspval':finlsp[bestperiodind],
'nbestpeaks':nbestpeaks,
'nbestlspvals':nbestlspvals,
'nbestperiods':nbestperiods,
'lspvals':lsp,
'omegas':omegas,
'periods':periods,
'method':'sls'}
else:
LOGERROR('no good detections for these times and mags, skipping...')
return {'bestperiod':npnan,
'bestlspval':npnan,
'nbestpeaks':nbestpeaks,
'nbestlspvals':None,
'nbestperiods':None,
'lspvals':None,
'periods':None,
'method':'sls'}
|
def get_periodicfeatures(
pfpickle,
lcbasedir,
outdir,
fourierorder=5,
# these are depth, duration, ingress duration
transitparams=(-0.01,0.1,0.1),
# these are depth, duration, depth ratio, secphase
ebparams=(-0.2,0.3,0.7,0.5),
pdiff_threshold=1.0e-4,
sidereal_threshold=1.0e-4,
sampling_peak_multiplier=5.0,
sampling_startp=None,
sampling_endp=None,
starfeatures=None,
timecols=None,
magcols=None,
errcols=None,
lcformat='hat-sql',
lcformatdir=None,
sigclip=10.0,
verbose=True,
raiseonfail=False
):
'''This gets all periodic features for the object.
Parameters
----------
pfpickle : str
The period-finding result pickle containing period-finder results to use
for the calculation of LC fit, periodogram, and phased LC features.
lcbasedir : str
The base directory where the light curve for the current object is
located.
outdir : str
The output directory where the results will be written.
fourierorder : int
The Fourier order to use to generate a sinusoidal function that is then
fit to the phased light curve.
transitparams : list of floats
The transit depth, duration, and ingress duration to use to generate a
trapezoid planet transit model fit to the phased light curve. The period
used is taken from the period-finder results in `pfpickle`, while the
epoch is automatically obtained from a spline fit to the phased light
curve.
ebparams : list of floats
The primary eclipse depth, eclipse duration, the primary-secondary depth
ratio, and the phase of the secondary eclipse to use to generate an
eclipsing binary model fit to the phased light curve. The period used is
taken from the period-finder results in `pfpickle`, while the epoch is
automatically obtained from a spline fit to the phased light curve.
pdiff_threshold : float
This is the max difference between periods to consider them the same.
sidereal_threshold : float
This is the max difference between any of the 'best' periods and the
sidereal day periods to consider them the same.
sampling_peak_multiplier : float
This is the minimum multiplicative factor of a 'best' period's
normalized periodogram peak over the sampling periodogram peak at the
same period required to accept the 'best' period as possibly real.
sampling_startp, sampling_endp : float
If the `pgramlist` doesn't have a time-sampling Lomb-Scargle
periodogram, it will be obtained automatically. Use these kwargs to
control the minimum and maximum period interval to be searched when
generating this periodogram.
starfeatures : str or None
If not None, this should be the filename of the
`starfeatures-<objectid>.pkl` created by
:py:func:`astrobase.lcproc.lcsfeatures.get_starfeatures` for this
object. This is used to get the neighbor's light curve and phase it with
this object's period to see if this object is blended.
timecols : list of str or None
The timecol keys to use from the lcdict in calculating the features.
magcols : list of str or None
The magcol keys to use from the lcdict in calculating the features.
errcols : list of str or None
The errcol keys to use from the lcdict in calculating the features.
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
verbose : bool
If True, will indicate progress while working.
raiseonfail : bool
If True, will raise an Exception if something goes wrong.
Returns
-------
str
Returns a filename for the output pickle containing all of the periodic
features for the input object's LC.
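For example (all paths below are illustrative; the lcformat key must be
one registered with lcproc)::

    featpkl = get_periodicfeatures(
        '/data/pfresults/periodfinding-OBJ-0001.pkl',
        '/data/lightcurves',
        '/data/periodicfeatures',
        lcformat='hat-sql'
    )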
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(fileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
# open the pfpickle
# open in binary mode so pickle.load works under Python 3
if pfpickle.endswith('.gz'):
infd = gzip.open(pfpickle,'rb')
else:
infd = open(pfpickle,'rb')
pf = pickle.load(infd)
infd.close()
lcfile = os.path.join(lcbasedir, pf['lcfbasename'])
objectid = pf['objectid']
if 'kwargs' in pf:
kwargs = pf['kwargs']
else:
kwargs = None
# override the default timecols, magcols, and errcols
# using the ones provided to the periodfinder
# if those don't exist, use the defaults from the lcformat def
if kwargs and 'timecols' in kwargs and timecols is None:
timecols = kwargs['timecols']
elif not kwargs and not timecols:
timecols = dtimecols
if kwargs and 'magcols' in kwargs and magcols is None:
magcols = kwargs['magcols']
elif not kwargs and not magcols:
magcols = dmagcols
if kwargs and 'errcols' in kwargs and errcols is None:
errcols = kwargs['errcols']
elif not kwargs and not errcols:
errcols = derrcols
# check if the light curve file exists
if not os.path.exists(lcfile):
LOGERROR("can't find LC %s for object %s" % (lcfile, objectid))
return None
# check if we have neighbors we can get the LCs for
if starfeatures is not None and os.path.exists(starfeatures):
with open(starfeatures,'rb') as infd:
starfeat = pickle.load(infd)
if starfeat['closestnbrlcfname'].size > 0:
nbr_full_lcf = starfeat['closestnbrlcfname'][0]
# check for this LC in the lcbasedir
if os.path.exists(os.path.join(lcbasedir,
os.path.basename(nbr_full_lcf))):
nbrlcf = os.path.join(lcbasedir,
os.path.basename(nbr_full_lcf))
# if it's not there, check for this file at the full LC location
elif os.path.exists(nbr_full_lcf):
nbrlcf = nbr_full_lcf
# otherwise, we can't find it, so complain
else:
LOGWARNING("can't find neighbor light curve file: %s in "
"its original directory: %s, or in this object's "
"lcbasedir: %s, skipping neighbor processing..." %
(os.path.basename(nbr_full_lcf),
os.path.dirname(nbr_full_lcf),
lcbasedir))
nbrlcf = None
else:
nbrlcf = None
else:
nbrlcf = None
# now, start processing for periodic feature extraction
try:
# get the object LC into a dict
lcdict = readerfunc(lcfile)
# this should handle lists/tuples being returned by readerfunc
# we assume that the first element is the actual lcdict
# FIXME: figure out how to not need this assumption
if ( (isinstance(lcdict, (list, tuple))) and
(isinstance(lcdict[0], dict)) ):
lcdict = lcdict[0]
# get the nbr object LC into a dict if there is one
if nbrlcf is not None:
nbrlcdict = readerfunc(nbrlcf)
# this should handle lists/tuples being returned by readerfunc
# we assume that the first element is the actual lcdict
# FIXME: figure out how to not need this assumption
if ( (isinstance(nbrlcdict, (list, tuple))) and
(isinstance(nbrlcdict[0], dict)) ):
nbrlcdict = nbrlcdict[0]
# this will be the output file
outfile = os.path.join(outdir, 'periodicfeatures-%s.pkl' %
squeeze(objectid).replace(' ','-'))
# normalize using the special function if specified
if normfunc is not None:
lcdict = normfunc(lcdict)
if nbrlcf:
nbrlcdict = normfunc(nbrlcdict)
resultdict = {}
for tcol, mcol, ecol in zip(timecols, magcols, errcols):
# dereference the columns and get them from the lcdict
if '.' in tcol:
tcolget = tcol.split('.')
else:
tcolget = [tcol]
times = _dict_get(lcdict, tcolget)
if nbrlcf:
nbrtimes = _dict_get(nbrlcdict, tcolget)
else:
nbrtimes = None
if '.' in mcol:
mcolget = mcol.split('.')
else:
mcolget = [mcol]
mags = _dict_get(lcdict, mcolget)
if nbrlcf:
nbrmags = _dict_get(nbrlcdict, mcolget)
else:
nbrmags = None
if '.' in ecol:
ecolget = ecol.split('.')
else:
ecolget = [ecol]
errs = _dict_get(lcdict, ecolget)
if nbrlcf:
nbrerrs = _dict_get(nbrlcdict, ecolget)
else:
nbrerrs = None
#
# filter out nans, etc. from the object and any neighbor LC
#
# get the finite values
finind = np.isfinite(times) & np.isfinite(mags) & np.isfinite(errs)
ftimes, fmags, ferrs = times[finind], mags[finind], errs[finind]
if nbrlcf:
nfinind = (np.isfinite(nbrtimes) &
np.isfinite(nbrmags) &
np.isfinite(nbrerrs))
nbrftimes, nbrfmags, nbrferrs = (nbrtimes[nfinind],
nbrmags[nfinind],
nbrerrs[nfinind])
# get nonzero errors
nzind = np.nonzero(ferrs)
ftimes, fmags, ferrs = ftimes[nzind], fmags[nzind], ferrs[nzind]
if nbrlcf:
nnzind = np.nonzero(nbrferrs)
nbrftimes, nbrfmags, nbrferrs = (nbrftimes[nnzind],
nbrfmags[nnzind],
nbrferrs[nnzind])
# normalize here if not using special normalization
if normfunc is None:
ntimes, nmags = normalize_magseries(
ftimes, fmags,
magsarefluxes=magsarefluxes
)
times, mags, errs = ntimes, nmags, ferrs
if nbrlcf:
nbrntimes, nbrnmags = normalize_magseries(
nbrftimes, nbrfmags,
magsarefluxes=magsarefluxes
)
nbrtimes, nbrmags, nbrerrs = nbrntimes, nbrnmags, nbrferrs
else:
nbrtimes, nbrmags, nbrerrs = None, None, None
else:
times, mags, errs = ftimes, fmags, ferrs
if times.size > 999:
#
# now we have times, mags, errs (and nbrtimes, nbrmags, nbrerrs)
#
available_pfmethods = []
available_pgrams = []
available_bestperiods = []
for k in pf[mcol].keys():
if k in PFMETHODS:
available_pgrams.append(pf[mcol][k])
if k != 'win':
available_pfmethods.append(
pf[mcol][k]['method']
)
available_bestperiods.append(
pf[mcol][k]['bestperiod']
)
#
# process periodic features for this magcol
#
featkey = 'periodicfeatures-%s' % mcol
resultdict[featkey] = {}
# first, handle the periodogram features
pgramfeat = periodicfeatures.periodogram_features(
available_pgrams, times, mags, errs,
sigclip=sigclip,
pdiff_threshold=pdiff_threshold,
sidereal_threshold=sidereal_threshold,
sampling_peak_multiplier=sampling_peak_multiplier,
sampling_startp=sampling_startp,
sampling_endp=sampling_endp,
verbose=verbose
)
resultdict[featkey].update(pgramfeat)
resultdict[featkey]['pfmethods'] = available_pfmethods
# then for each bestperiod, get phasedlc and lcfit features
for _ind, pfm, bp in zip(range(len(available_bestperiods)),
available_pfmethods,
available_bestperiods):
resultdict[featkey][pfm] = periodicfeatures.lcfit_features(
times, mags, errs, bp,
fourierorder=fourierorder,
transitparams=transitparams,
ebparams=ebparams,
sigclip=sigclip,
magsarefluxes=magsarefluxes,
verbose=verbose
)
phasedlcfeat = periodicfeatures.phasedlc_features(
times, mags, errs, bp,
nbrtimes=nbrtimes,
nbrmags=nbrmags,
nbrerrs=nbrerrs
)
resultdict[featkey][pfm].update(phasedlcfeat)
else:
LOGERROR('not enough finite measurements in magcol: %s, for '
'pfpickle: %s, skipping this magcol'
% (mcol, pfpickle))
featkey = 'periodicfeatures-%s' % mcol
resultdict[featkey] = None
#
# end of per magcol processing
#
# write resultdict to pickle
outfile = os.path.join(outdir, 'periodicfeatures-%s.pkl' %
squeeze(objectid).replace(' ','-'))
with open(outfile,'wb') as outfd:
pickle.dump(resultdict, outfd, pickle.HIGHEST_PROTOCOL)
return outfile
except Exception as e:
LOGEXCEPTION('failed to run for pf: %s, lcfile: %s' %
(pfpickle, lcfile))
if raiseonfail:
raise
else:
return None
|
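# A minimal usage sketch for get_periodicfeatures above. The pickle path,
# light curve directory, and output directory here are hypothetical; any of
# the kwargs collected by the serial/parallel drivers below can also be
# passed directly.
def _example_get_periodicfeatures():
    pfpkl = 'periodfinding-HAT-123-0000001.pkl'   # hypothetical pfpickle
    return get_periodicfeatures(
        pfpkl,
        'lightcurves',          # lcbasedir: where this object's LC lives
        'periodicfeatures',     # outdir: where the output pickle is written
        fourierorder=5,
        lcformat='hat-sql',
        sigclip=10.0,
        verbose=True
    )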
def _periodicfeatures_worker(task):
'''
This is a parallel worker for the drivers below.
'''
pfpickle, lcbasedir, outdir, starfeatures, kwargs = task
try:
return get_periodicfeatures(pfpickle,
lcbasedir,
outdir,
starfeatures=starfeatures,
**kwargs)
except Exception as e:
LOGEXCEPTION('failed to get periodicfeatures for %s' % pfpickle)
|
def serial_periodicfeatures(pfpkl_list,
lcbasedir,
outdir,
starfeaturesdir=None,
fourierorder=5,
# these are depth, duration, ingress duration
transitparams=(-0.01,0.1,0.1),
# these are depth, duration, depth ratio, secphase
ebparams=(-0.2,0.3,0.7,0.5),
pdiff_threshold=1.0e-4,
sidereal_threshold=1.0e-4,
sampling_peak_multiplier=5.0,
sampling_startp=None,
sampling_endp=None,
starfeatures=None,
timecols=None,
magcols=None,
errcols=None,
lcformat='hat-sql',
lcformatdir=None,
sigclip=10.0,
verbose=False,
maxobjects=None):
'''This drives the periodicfeatures collection for a list of periodfinding
pickles.
Parameters
----------
pfpkl_list : list of str
The list of period-finding pickles to use.
lcbasedir : str
The base directory where the associated light curves are located.
outdir : str
The directory where the results will be written.
starfeaturesdir : str or None
The directory containing the `starfeatures-<objectid>.pkl` files for
        each object, used to calculate neighbor proximity light curve features.
fourierorder : int
The Fourier order to use to generate sinusoidal function and fit that to
the phased light curve.
transitparams : list of floats
The transit depth, duration, and ingress duration to use to generate a
trapezoid planet transit model fit to the phased light curve. The period
used is the one provided in `period`, while the epoch is automatically
obtained from a spline fit to the phased light curve.
ebparams : list of floats
The primary eclipse depth, eclipse duration, the primary-secondary depth
ratio, and the phase of the secondary eclipse to use to generate an
eclipsing binary model fit to the phased light curve. The period used is
the one provided in `period`, while the epoch is automatically obtained
from a spline fit to the phased light curve.
pdiff_threshold : float
This is the max difference between periods to consider them the same.
sidereal_threshold : float
This is the max difference between any of the 'best' periods and the
sidereal day periods to consider them the same.
sampling_peak_multiplier : float
This is the minimum multiplicative factor of a 'best' period's
normalized periodogram peak over the sampling periodogram peak at the
same period required to accept the 'best' period as possibly real.
sampling_startp, sampling_endp : float
If the `pgramlist` doesn't have a time-sampling Lomb-Scargle
periodogram, it will be obtained automatically. Use these kwargs to
control the minimum and maximum period interval to be searched when
generating this periodogram.
timecols : list of str or None
The timecol keys to use from the lcdict in calculating the features.
magcols : list of str or None
The magcol keys to use from the lcdict in calculating the features.
errcols : list of str or None
The errcol keys to use from the lcdict in calculating the features.
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
        If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
verbose : bool
If True, will indicate progress while working.
maxobjects : int
The total number of objects to process from `pfpkl_list`.
Returns
-------
Nothing.
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(fileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
# make sure to make the output directory if it doesn't exist
if not os.path.exists(outdir):
os.makedirs(outdir)
if maxobjects:
pfpkl_list = pfpkl_list[:maxobjects]
LOGINFO('%s periodfinding pickles to process' % len(pfpkl_list))
# if the starfeaturedir is provided, try to find a starfeatures pickle for
# each periodfinding pickle in pfpkl_list
if starfeaturesdir and os.path.exists(starfeaturesdir):
starfeatures_list = []
LOGINFO('collecting starfeatures pickles...')
for pfpkl in pfpkl_list:
sfpkl1 = os.path.basename(pfpkl).replace('periodfinding',
'starfeatures')
sfpkl2 = sfpkl1.replace('.gz','')
sfpath1 = os.path.join(starfeaturesdir, sfpkl1)
sfpath2 = os.path.join(starfeaturesdir, sfpkl2)
            if os.path.exists(sfpath1):
                starfeatures_list.append(sfpath1)
            elif os.path.exists(sfpath2):
                starfeatures_list.append(sfpath2)
else:
starfeatures_list.append(None)
else:
starfeatures_list = [None for x in pfpkl_list]
# generate the task list
kwargs = {'fourierorder':fourierorder,
'transitparams':transitparams,
'ebparams':ebparams,
'pdiff_threshold':pdiff_threshold,
'sidereal_threshold':sidereal_threshold,
'sampling_peak_multiplier':sampling_peak_multiplier,
'sampling_startp':sampling_startp,
'sampling_endp':sampling_endp,
'timecols':timecols,
'magcols':magcols,
'errcols':errcols,
'lcformat':lcformat,
'lcformatdir':lcformatdir,
'sigclip':sigclip,
'verbose':verbose}
tasks = [(x, lcbasedir, outdir, y, kwargs) for (x,y) in
zip(pfpkl_list, starfeatures_list)]
LOGINFO('processing periodfinding pickles...')
for task in tqdm(tasks):
_periodicfeatures_worker(task)
|
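# A short sketch of driving serial_periodicfeatures over a list of
# period-finder pickles collected with a glob; all paths and the glob
# pattern are placeholders.
def _example_serial_periodicfeatures():
    import glob
    pfpkl_list = sorted(glob.glob('periodfinding/periodfinding-*.pkl'))
    serial_periodicfeatures(
        pfpkl_list,
        'lightcurves',                      # lcbasedir
        'periodicfeatures',                 # outdir
        starfeaturesdir='starfeatures',     # enables neighbor features
        lcformat='hat-sql',
        maxobjects=100
    )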
def parallel_periodicfeatures(pfpkl_list,
lcbasedir,
outdir,
starfeaturesdir=None,
fourierorder=5,
# these are depth, duration, ingress duration
transitparams=(-0.01,0.1,0.1),
# these are depth, duration, depth ratio, secphase
ebparams=(-0.2,0.3,0.7,0.5),
pdiff_threshold=1.0e-4,
sidereal_threshold=1.0e-4,
sampling_peak_multiplier=5.0,
sampling_startp=None,
sampling_endp=None,
timecols=None,
magcols=None,
errcols=None,
lcformat='hat-sql',
lcformatdir=None,
sigclip=10.0,
verbose=False,
maxobjects=None,
nworkers=NCPUS):
'''This runs periodic feature generation in parallel for all periodfinding
pickles in the input list.
Parameters
----------
pfpkl_list : list of str
The list of period-finding pickles to use.
lcbasedir : str
The base directory where the associated light curves are located.
outdir : str
The directory where the results will be written.
starfeaturesdir : str or None
The directory containing the `starfeatures-<objectid>.pkl` files for
        each object, used to calculate neighbor proximity light curve features.
fourierorder : int
The Fourier order to use to generate sinusoidal function and fit that to
the phased light curve.
transitparams : list of floats
The transit depth, duration, and ingress duration to use to generate a
trapezoid planet transit model fit to the phased light curve. The period
used is the one provided in `period`, while the epoch is automatically
obtained from a spline fit to the phased light curve.
ebparams : list of floats
The primary eclipse depth, eclipse duration, the primary-secondary depth
ratio, and the phase of the secondary eclipse to use to generate an
eclipsing binary model fit to the phased light curve. The period used is
the one provided in `period`, while the epoch is automatically obtained
from a spline fit to the phased light curve.
pdiff_threshold : float
This is the max difference between periods to consider them the same.
sidereal_threshold : float
This is the max difference between any of the 'best' periods and the
sidereal day periods to consider them the same.
sampling_peak_multiplier : float
This is the minimum multiplicative factor of a 'best' period's
normalized periodogram peak over the sampling periodogram peak at the
same period required to accept the 'best' period as possibly real.
sampling_startp, sampling_endp : float
If the `pgramlist` doesn't have a time-sampling Lomb-Scargle
periodogram, it will be obtained automatically. Use these kwargs to
control the minimum and maximum period interval to be searched when
generating this periodogram.
timecols : list of str or None
The timecol keys to use from the lcdict in calculating the features.
magcols : list of str or None
The magcol keys to use from the lcdict in calculating the features.
errcols : list of str or None
The errcol keys to use from the lcdict in calculating the features.
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
        If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
verbose : bool
If True, will indicate progress while working.
maxobjects : int
The total number of objects to process from `pfpkl_list`.
nworkers : int
The number of parallel workers to launch to process the input.
Returns
-------
dict
A dict containing key: val pairs of the input period-finder result and
the output periodic feature result pickles for each input pickle is
returned.
'''
# make sure to make the output directory if it doesn't exist
if not os.path.exists(outdir):
os.makedirs(outdir)
if maxobjects:
pfpkl_list = pfpkl_list[:maxobjects]
LOGINFO('%s periodfinding pickles to process' % len(pfpkl_list))
# if the starfeaturedir is provided, try to find a starfeatures pickle for
# each periodfinding pickle in pfpkl_list
if starfeaturesdir and os.path.exists(starfeaturesdir):
starfeatures_list = []
LOGINFO('collecting starfeatures pickles...')
for pfpkl in pfpkl_list:
sfpkl1 = os.path.basename(pfpkl).replace('periodfinding',
'starfeatures')
sfpkl2 = sfpkl1.replace('.gz','')
sfpath1 = os.path.join(starfeaturesdir, sfpkl1)
sfpath2 = os.path.join(starfeaturesdir, sfpkl2)
            if os.path.exists(sfpath1):
                starfeatures_list.append(sfpath1)
            elif os.path.exists(sfpath2):
                starfeatures_list.append(sfpath2)
else:
starfeatures_list.append(None)
else:
starfeatures_list = [None for x in pfpkl_list]
# generate the task list
kwargs = {'fourierorder':fourierorder,
'transitparams':transitparams,
'ebparams':ebparams,
'pdiff_threshold':pdiff_threshold,
'sidereal_threshold':sidereal_threshold,
'sampling_peak_multiplier':sampling_peak_multiplier,
'sampling_startp':sampling_startp,
'sampling_endp':sampling_endp,
'timecols':timecols,
'magcols':magcols,
'errcols':errcols,
'lcformat':lcformat,
              'lcformatdir':lcformatdir,
'sigclip':sigclip,
'verbose':verbose}
tasks = [(x, lcbasedir, outdir, y, kwargs) for (x,y) in
zip(pfpkl_list, starfeatures_list)]
LOGINFO('processing periodfinding pickles...')
with ProcessPoolExecutor(max_workers=nworkers) as executor:
resultfutures = executor.map(_periodicfeatures_worker, tasks)
results = [x for x in resultfutures]
resdict = {os.path.basename(x):y for (x,y) in zip(pfpkl_list, results)}
return resdict
|
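# The same operation as the serial driver, run with a pool of worker
# processes instead. The returned dict maps each input pickle's basename to
# its output periodicfeatures pickle (or None on failure). Paths are
# placeholders.
def _example_parallel_periodicfeatures():
    import glob
    pfpkl_list = sorted(glob.glob('periodfinding/periodfinding-*.pkl'))
    resdict = parallel_periodicfeatures(
        pfpkl_list,
        'lightcurves',
        'periodicfeatures',
        starfeaturesdir='starfeatures',
        lcformat='hat-sql',
        nworkers=4
    )
    return resdict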
def parallel_periodicfeatures_lcdir(
pfpkl_dir,
lcbasedir,
outdir,
pfpkl_glob='periodfinding-*.pkl*',
starfeaturesdir=None,
fourierorder=5,
# these are depth, duration, ingress duration
transitparams=(-0.01,0.1,0.1),
# these are depth, duration, depth ratio, secphase
ebparams=(-0.2,0.3,0.7,0.5),
pdiff_threshold=1.0e-4,
sidereal_threshold=1.0e-4,
sampling_peak_multiplier=5.0,
sampling_startp=None,
sampling_endp=None,
timecols=None,
magcols=None,
errcols=None,
lcformat='hat-sql',
lcformatdir=None,
sigclip=10.0,
verbose=False,
maxobjects=None,
nworkers=NCPUS,
recursive=True,
):
'''This runs parallel periodicfeature extraction for a directory of
periodfinding result pickles.
Parameters
----------
pfpkl_dir : str
The directory containing the pickles to process.
lcbasedir : str
The directory where all of the associated light curve files are located.
outdir : str
The directory where all the output will be written.
pfpkl_glob : str
The UNIX file glob to use to search for period-finder result pickles in
`pfpkl_dir`.
starfeaturesdir : str or None
The directory containing the `starfeatures-<objectid>.pkl` files for
        each object, used to calculate neighbor proximity light curve features.
fourierorder : int
The Fourier order to use to generate sinusoidal function and fit that to
the phased light curve.
transitparams : list of floats
The transit depth, duration, and ingress duration to use to generate a
trapezoid planet transit model fit to the phased light curve. The period
used is the one provided in `period`, while the epoch is automatically
obtained from a spline fit to the phased light curve.
ebparams : list of floats
The primary eclipse depth, eclipse duration, the primary-secondary depth
ratio, and the phase of the secondary eclipse to use to generate an
eclipsing binary model fit to the phased light curve. The period used is
the one provided in `period`, while the epoch is automatically obtained
from a spline fit to the phased light curve.
pdiff_threshold : float
This is the max difference between periods to consider them the same.
sidereal_threshold : float
This is the max difference between any of the 'best' periods and the
sidereal day periods to consider them the same.
sampling_peak_multiplier : float
This is the minimum multiplicative factor of a 'best' period's
normalized periodogram peak over the sampling periodogram peak at the
same period required to accept the 'best' period as possibly real.
sampling_startp, sampling_endp : float
If the `pgramlist` doesn't have a time-sampling Lomb-Scargle
periodogram, it will be obtained automatically. Use these kwargs to
control the minimum and maximum period interval to be searched when
generating this periodogram.
timecols : list of str or None
The timecol keys to use from the lcdict in calculating the features.
magcols : list of str or None
The magcol keys to use from the lcdict in calculating the features.
errcols : list of str or None
The errcol keys to use from the lcdict in calculating the features.
lcformat : str
This is the `formatkey` associated with your light curve format, which
you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `basedir` or `use_list_of_filenames`.
lcformatdir : str or None
        If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
verbose : bool
If True, will indicate progress while working.
maxobjects : int
The total number of objects to process from `pfpkl_list`.
nworkers : int
The number of parallel workers to launch to process the input.
Returns
-------
dict
A dict containing key: val pairs of the input period-finder result and
the output periodic feature result pickles for each input pickle is
returned.
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(dfileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
fileglob = pfpkl_glob
# now find the files
LOGINFO('searching for periodfinding pickles in %s ...' % pfpkl_dir)
if recursive is False:
matching = glob.glob(os.path.join(pfpkl_dir, fileglob))
else:
# use recursive glob for Python 3.5+
if sys.version_info[:2] > (3,4):
matching = glob.glob(os.path.join(pfpkl_dir,
'**',
fileglob),recursive=True)
# otherwise, use os.walk and glob
else:
# use os.walk to go through the directories
walker = os.walk(pfpkl_dir)
matching = []
for root, dirs, _files in walker:
for sdir in dirs:
searchpath = os.path.join(root,
sdir,
fileglob)
foundfiles = glob.glob(searchpath)
if foundfiles:
matching.extend(foundfiles)
# now that we have all the files, process them
if matching and len(matching) > 0:
LOGINFO('found %s periodfinding pickles, getting periodicfeatures...' %
len(matching))
return parallel_periodicfeatures(
matching,
lcbasedir,
outdir,
starfeaturesdir=starfeaturesdir,
fourierorder=fourierorder,
transitparams=transitparams,
ebparams=ebparams,
pdiff_threshold=pdiff_threshold,
sidereal_threshold=sidereal_threshold,
sampling_peak_multiplier=sampling_peak_multiplier,
sampling_startp=sampling_startp,
sampling_endp=sampling_endp,
timecols=timecols,
magcols=magcols,
errcols=errcols,
lcformat=lcformat,
lcformatdir=lcformatdir,
sigclip=sigclip,
verbose=verbose,
maxobjects=maxobjects,
nworkers=nworkers,
)
else:
LOGERROR('no periodfinding pickles found in %s' % (pfpkl_dir))
return None
|
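# A directory-driven sketch: search pfpkl_dir for period-finder pickles
# matching pfpkl_glob (recursively by default) and process them in parallel.
# The directories are placeholders.
def _example_parallel_periodicfeatures_lcdir():
    return parallel_periodicfeatures_lcdir(
        'periodfinding',                    # pfpkl_dir
        'lightcurves',                      # lcbasedir
        'periodicfeatures',                 # outdir
        pfpkl_glob='periodfinding-*.pkl*',
        lcformat='hat-sql',
        nworkers=4,
        recursive=True
    )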
def _parse_xmatch_catalog_header(xc, xk):
'''
    This parses the header of an xmatch catalog file and returns the opened
    file object along with the parsed column definitions.
Parameters
----------
xc : str
The file name of an xmatch catalog prepared previously.
xk : list of str
This is a list of column names to extract from the xmatch catalog.
Returns
-------
tuple
The tuple returned is of the form::
(infd: the file object associated with the opened xmatch catalog,
catdefdict: a dict describing the catalog column definitions,
catcolinds: column number indices of the catalog,
catcoldtypes: the numpy dtypes of the catalog columns,
catcolnames: the names of each catalog column,
catcolunits: the units associated with each catalog column)
'''
catdef = []
# read in this catalog and transparently handle gzipped files
if xc.endswith('.gz'):
infd = gzip.open(xc,'rb')
else:
infd = open(xc,'rb')
# read in the defs
for line in infd:
if line.decode().startswith('#'):
catdef.append(
line.decode().replace('#','').strip().rstrip('\n')
)
if not line.decode().startswith('#'):
break
if not len(catdef) > 0:
LOGERROR("catalog definition not parseable "
"for catalog: %s, skipping..." % xc)
return None
catdef = ' '.join(catdef)
catdefdict = json.loads(catdef)
catdefkeys = [x['key'] for x in catdefdict['columns']]
catdefdtypes = [x['dtype'] for x in catdefdict['columns']]
catdefnames = [x['name'] for x in catdefdict['columns']]
catdefunits = [x['unit'] for x in catdefdict['columns']]
# get the correct column indices and dtypes for the requested columns
# from the catdefdict
catcolinds = []
catcoldtypes = []
catcolnames = []
catcolunits = []
for xkcol in xk:
if xkcol in catdefkeys:
xkcolind = catdefkeys.index(xkcol)
catcolinds.append(xkcolind)
catcoldtypes.append(catdefdtypes[xkcolind])
catcolnames.append(catdefnames[xkcolind])
catcolunits.append(catdefunits[xkcolind])
return (infd, catdefdict,
catcolinds, catcoldtypes, catcolnames, catcolunits)
|
def load_xmatch_external_catalogs(xmatchto, xmatchkeys, outfile=None):
'''This loads the external xmatch catalogs into a dict for use in an xmatch.
Parameters
----------
xmatchto : list of str
This is a list of paths to all the catalog text files that will be
loaded.
The text files must be 'CSVs' that use the '|' character as the
        separator between columns. These files should all begin with a header in
        JSON format on lines starting with the '#' character. This header will
define the catalog and contains the name of the catalog and the column
definitions. Column definitions must have the column name and the numpy
dtype of the columns (in the same format as that expected for the
numpy.genfromtxt function). Any line that does not begin with '#' is
assumed to be part of the columns in the catalog. An example is shown
below::
# {"name":"NSVS catalog of variable stars",
# "columns":[
# {"key":"objectid", "dtype":"U20", "name":"Object ID", "unit": null},
# {"key":"ra", "dtype":"f8", "name":"RA", "unit":"deg"},
# {"key":"decl","dtype":"f8", "name": "Declination", "unit":"deg"},
# {"key":"sdssr","dtype":"f8","name":"SDSS r", "unit":"mag"},
# {"key":"vartype","dtype":"U20","name":"Variable type", "unit":null}
# ],
# "colra":"ra",
# "coldec":"decl",
# "description":"Contains variable stars from the NSVS catalog"}
objectid1 | 45.0 | -20.0 | 12.0 | detached EB
objectid2 | 145.0 | 23.0 | 10.0 | RRab
objectid3 | 12.0 | 11.0 | 14.0 | Cepheid
.
.
.
xmatchkeys : list of lists
This is the list of lists of column names (as str) to get out of each
`xmatchto` catalog. This should be the same length as `xmatchto` and
each element here will apply to the respective file in `xmatchto`.
outfile : str or None
If this is not None, set this to the name of the pickle to write the
        collected xmatch catalogs to. This pickle can then be loaded
transparently by the :py:func:`astrobase.checkplot.pkl.checkplot_dict`,
:py:func:`astrobase.checkplot.pkl.checkplot_pickle` functions to provide
xmatch info to the
:py:func:`astrobase.checkplot.pkl_xmatch.xmatch_external_catalogs`
function below.
If this is None, will return the loaded xmatch catalogs directly. This
will be a huge dict, so make sure you have enough RAM.
Returns
-------
str or dict
Based on the `outfile` kwarg, will either return the path to a collected
xmatch pickle file or the collected xmatch dict.
'''
outdict = {}
for xc, xk in zip(xmatchto, xmatchkeys):
parsed_catdef = _parse_xmatch_catalog_header(xc, xk)
if not parsed_catdef:
continue
(infd, catdefdict,
catcolinds, catcoldtypes,
catcolnames, catcolunits) = parsed_catdef
# get the specified columns out of the catalog
catarr = np.genfromtxt(infd,
usecols=catcolinds,
names=xk,
dtype=','.join(catcoldtypes),
comments='#',
delimiter='|',
autostrip=True)
infd.close()
catshortname = os.path.splitext(os.path.basename(xc))[0]
catshortname = catshortname.replace('.csv','')
#
# make a kdtree for this catalog
#
# get the ra and decl columns
objra, objdecl = (catarr[catdefdict['colra']],
catarr[catdefdict['coldec']])
# get the xyz unit vectors from ra,decl
cosdecl = np.cos(np.radians(objdecl))
sindecl = np.sin(np.radians(objdecl))
cosra = np.cos(np.radians(objra))
sinra = np.sin(np.radians(objra))
xyz = np.column_stack((cosra*cosdecl,sinra*cosdecl, sindecl))
# generate the kdtree
kdt = cKDTree(xyz,copy_data=True)
# generate the outdict element for this catalog
catoutdict = {'kdtree':kdt,
'data':catarr,
'columns':xk,
'colnames':catcolnames,
'colunits':catcolunits,
'name':catdefdict['name'],
'desc':catdefdict['description']}
outdict[catshortname] = catoutdict
if outfile is not None:
# if we're on OSX, we apparently need to save the file in chunks smaller
# than 2 GB to make it work right. can't load pickles larger than 4 GB
# either, but 3 GB < total size < 4 GB appears to be OK when loading.
# also see: https://bugs.python.org/issue24658.
# fix adopted from: https://stackoverflow.com/a/38003910
if sys.platform == 'darwin':
dumpbytes = pickle.dumps(outdict, protocol=pickle.HIGHEST_PROTOCOL)
max_bytes = 2**31 - 1
with open(outfile, 'wb') as outfd:
for idx in range(0, len(dumpbytes), max_bytes):
outfd.write(dumpbytes[idx:idx+max_bytes])
else:
with open(outfile, 'wb') as outfd:
pickle.dump(outdict, outfd, pickle.HIGHEST_PROTOCOL)
return outfile
else:
return outdict
|
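# A small end-to-end sketch of the xmatch catalog format expected by
# load_xmatch_external_catalogs: a JSON header on '#' lines followed by
# '|'-separated rows. The file names and catalog contents here are made up.
# Note that the header parser above appears to consume the first non-'#'
# line when it detects the end of the header, so a throwaway row is placed
# first.
def _example_load_xmatch_external_catalogs():
    catalog_text = (
        '# {"name":"Example variable star catalog",\n'
        '# "columns":[\n'
        '# {"key":"objectid","dtype":"U20","name":"Object ID","unit":null},\n'
        '# {"key":"ra","dtype":"f8","name":"RA","unit":"deg"},\n'
        '# {"key":"decl","dtype":"f8","name":"Declination","unit":"deg"},\n'
        '# {"key":"vartype","dtype":"U20","name":"Variable type","unit":null}\n'
        '# ],\n'
        '# "colra":"ra",\n'
        '# "coldec":"decl",\n'
        '# "description":"A tiny example catalog"}\n'
        'objectid0 | 280.0 | 45.0 | RRab\n'
        'objectid1 | 45.0 | -20.0 | detached EB\n'
        'objectid2 | 145.0 | 23.0 | Cepheid\n'
    )
    with open('example-catalog.csv', 'w') as outfd:
        outfd.write(catalog_text)
    return load_xmatch_external_catalogs(
        ['example-catalog.csv'],
        [['objectid', 'ra', 'decl', 'vartype']],
        outfile='example-xmatch.pkl'
    )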
def xmatch_external_catalogs(checkplotdict,
xmatchinfo,
xmatchradiusarcsec=2.0,
returndirect=False,
updatexmatch=True,
savepickle=None):
'''This matches the current object in the checkplotdict to all of the
external match catalogs specified.
Parameters
----------
checkplotdict : dict
This is a checkplotdict, generated by either the `checkplot_dict`
function, or read in from a `_read_checkplot_picklefile` function. This
must have a structure somewhat like the following, where the indicated
keys below are required::
{'objectid': the ID assigned to this object
'objectinfo': {'objectid': ID assigned to this object,
'ra': right ascension of the object in decimal deg,
'decl': declination of the object in decimal deg}}
xmatchinfo : str or dict
This is either the xmatch dict produced by the function
:py:func:`astrobase.checkplot.pkl_xmatch.load_xmatch_external_catalogs`
above, or the path to the xmatch info pickle file produced by that
function.
xmatchradiusarcsec : float
This is the cross-matching radius to use in arcseconds.
returndirect : bool
If this is True, will only return the xmatch results as a dict. If this
False, will return the checkplotdict with the xmatch results added in as
a key-val pair.
updatexmatch : bool
This function will look for an existing 'xmatch' key in the input
checkplotdict indicating that an xmatch has been performed before. If
`updatexmatch` is set to True, the xmatch results will be added onto
(e.g. when xmatching to additional catalogs after the first run). If
this is set to False, the xmatch key-val pair will be completely
overwritten.
savepickle : str or None
        If this is not None, it must be a path to where the updated
        checkplotdict will be written as a new checkplot pickle. If this is
        None, only the updated checkplotdict is returned.
Returns
-------
dict or str
        If `savepickle` is None, this returns a checkplotdict, with the xmatch
results added in. An 'xmatch' key will be added to this dict, with
something like the following dict as the value::
{'xmatchradiusarcsec':xmatchradiusarcsec,
'catalog1':{'name':'Catalog of interesting things',
'found':True,
'distarcsec':0.7,
'info':{'objectid':...,'ra':...,'decl':...,'desc':...}},
'catalog2':{'name':'Catalog of more interesting things',
'found':False,
'distarcsec':nan,
'info':None},
.
.
.
....}
This will contain the matches of the object in the input checkplotdict
to all of the catalogs provided in `xmatchinfo`.
        If `savepickle` is set to a file path, will return the path to the
        saved checkplot pickle file.
'''
# load the xmatch info
if isinstance(xmatchinfo, str) and os.path.exists(xmatchinfo):
with open(xmatchinfo,'rb') as infd:
xmatchdict = pickle.load(infd)
elif isinstance(xmatchinfo, dict):
xmatchdict = xmatchinfo
else:
LOGERROR("can't figure out xmatch info, can't xmatch, skipping...")
return checkplotdict
#
# generate the xmatch spec
#
# get our ra, decl
objra = checkplotdict['objectinfo']['ra']
objdecl = checkplotdict['objectinfo']['decl']
cosdecl = np.cos(np.radians(objdecl))
sindecl = np.sin(np.radians(objdecl))
cosra = np.cos(np.radians(objra))
sinra = np.sin(np.radians(objra))
objxyz = np.column_stack((cosra*cosdecl,
sinra*cosdecl,
sindecl))
# this is the search distance in xyz unit vectors
xyzdist = 2.0 * np.sin(np.radians(xmatchradiusarcsec/3600.0)/2.0)
#
# now search in each external catalog
#
xmatchresults = {}
extcats = sorted(list(xmatchdict.keys()))
for ecat in extcats:
# get the kdtree
kdt = xmatchdict[ecat]['kdtree']
# look up the coordinates
kdt_dist, kdt_ind = kdt.query(objxyz,
k=1,
distance_upper_bound=xyzdist)
# sort by matchdist
mdsorted = np.argsort(kdt_dist)
matchdists = kdt_dist[mdsorted]
matchinds = kdt_ind[mdsorted]
if matchdists[np.isfinite(matchdists)].size == 0:
xmatchresults[ecat] = {'name':xmatchdict[ecat]['name'],
'desc':xmatchdict[ecat]['desc'],
'found':False,
'distarcsec':None,
'info':None}
else:
for md, mi in zip(matchdists, matchinds):
if np.isfinite(md) and md < xyzdist:
infodict = {}
distarcsec = _xyzdist_to_distarcsec(md)
for col in xmatchdict[ecat]['columns']:
coldata = xmatchdict[ecat]['data'][col][mi]
if isinstance(coldata, str):
coldata = coldata.strip()
infodict[col] = coldata
xmatchresults[ecat] = {
'name':xmatchdict[ecat]['name'],
'desc':xmatchdict[ecat]['desc'],
'found':True,
'distarcsec':distarcsec,
'info':infodict,
'colkeys':xmatchdict[ecat]['columns'],
'colnames':xmatchdict[ecat]['colnames'],
'colunit':xmatchdict[ecat]['colunits'],
}
break
#
# should now have match results for all external catalogs
#
if returndirect:
return xmatchresults
else:
if updatexmatch and 'xmatch' in checkplotdict:
checkplotdict['xmatch'].update(xmatchresults)
else:
checkplotdict['xmatch'] = xmatchresults
if savepickle:
cpf = _write_checkplot_picklefile(checkplotdict,
outfile=savepickle,
protocol=4)
return cpf
else:
return checkplotdict
|
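# A minimal sketch of cross-matching a single object against the collected
# xmatch pickle produced by the sketch after load_xmatch_external_catalogs
# above. The checkplotdict here is a bare-bones stand-in carrying only the
# keys this function needs.
def _example_xmatch_external_catalogs():
    cpd = {'objectid': 'HAT-123-0000001',
           'objectinfo': {'objectid': 'HAT-123-0000001',
                          'ra': 45.0,
                          'decl': -20.0}}
    cpd = xmatch_external_catalogs(cpd,
                                   'example-xmatch.pkl',
                                   xmatchradiusarcsec=3.0)
    return cpd['xmatch']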
def angle_wrap(angle, radians=False):
'''Wraps the input angle to 360.0 degrees.
Parameters
----------
angle : float
The angle to wrap around 360.0 deg.
radians : bool
If True, will assume that the input is in radians. The output will then
also be in radians.
Returns
-------
float
Wrapped angle. If radians is True: input is assumed to be in radians,
output is also in radians.
'''
if radians:
wrapped = angle % (2.0*pi_value)
if wrapped < 0.0:
wrapped = 2.0*pi_value + wrapped
else:
wrapped = angle % 360.0
if wrapped < 0.0:
wrapped = 360.0 + wrapped
return wrapped
|
def decimal_to_dms(decimal_value):
'''Converts from decimal degrees (for declination coords) to DD:MM:SS.
Parameters
----------
decimal_value : float
A decimal value to convert to degrees, minutes, seconds sexagesimal
format.
Returns
-------
tuple
        A four element tuple is returned: (sign, DD, MM, SS.ssss...)
'''
if decimal_value < 0:
negative = True
dec_val = fabs(decimal_value)
else:
negative = False
dec_val = decimal_value
degrees = trunc(dec_val)
minutes_deg = dec_val - degrees
minutes_mm = minutes_deg * 60.0
minutes_out = trunc(minutes_mm)
seconds = (minutes_mm - minutes_out)*60.0
    if negative:
        return '-', degrees, minutes_out, seconds
    else:
        return '+', degrees, minutes_out, seconds
|
def decimal_to_hms(decimal_value):
'''Converts from decimal degrees (for RA coords) to HH:MM:SS.
Parameters
----------
decimal_value : float
A decimal value to convert to hours, minutes, seconds. Negative values
will be wrapped around 360.0.
Returns
-------
tuple
A three element tuple is returned: (HH, MM, SS.ssss...)
'''
# wrap to 360.0
if decimal_value < 0:
dec_wrapped = 360.0 + decimal_value
else:
dec_wrapped = decimal_value
# convert to decimal hours first
dec_hours = dec_wrapped/15.0
if dec_hours < 0:
negative = True
dec_val = fabs(dec_hours)
else:
negative = False
dec_val = dec_hours
hours = trunc(dec_val)
minutes_hrs = dec_val - hours
minutes_mm = minutes_hrs * 60.0
minutes_out = trunc(minutes_mm)
seconds = (minutes_mm - minutes_out)*60.0
if negative:
hours = -hours
return hours, minutes_out, seconds
else:
return hours, minutes_out, seconds
|
def hms_str_to_tuple(hms_string):
'''Converts a string of the form HH:MM:SS or HH MM SS to a tuple of the form
(HH, MM, SS).
Parameters
----------
hms_string : str
A RA coordinate string of the form 'HH:MM:SS.sss' or 'HH MM SS.sss'.
Returns
-------
tuple
A three element tuple is returned (HH, MM, SS.ssss...)
'''
if ':' in hms_string:
separator = ':'
else:
separator = ' '
hh, mm, ss = hms_string.split(separator)
return int(hh), int(mm), float(ss)
|
def dms_str_to_tuple(dms_string):
'''Converts a string of the form [+-]DD:MM:SS or [+-]DD MM SS to a tuple of
the form (sign, DD, MM, SS).
Parameters
----------
dms_string : str
A declination coordinate string of the form '[+-]DD:MM:SS.sss' or
'[+-]DD MM SS.sss'. The sign in front of DD is optional. If it's not
there, this function will assume that the coordinate string is a
positive value.
Returns
-------
tuple
A four element tuple of the form: (sign, DD, MM, SS.ssss...).
'''
if ':' in dms_string:
separator = ':'
else:
separator = ' '
sign_dd, mm, ss = dms_string.split(separator)
if sign_dd.startswith('+') or sign_dd.startswith('-'):
sign, dd = sign_dd[0], sign_dd[1:]
else:
sign, dd = '+', sign_dd
return sign, int(dd), int(mm), float(ss)
|
def hms_to_decimal(hours, minutes, seconds, returndeg=True):
'''Converts from HH, MM, SS to a decimal value.
Parameters
----------
hours : int
The HH part of a RA coordinate.
minutes : int
The MM part of a RA coordinate.
seconds : float
The SS.sss part of a RA coordinate.
returndeg : bool
If this is True, then will return decimal degrees as the output.
If this is False, then will return decimal HOURS as the output.
Decimal hours are sometimes used in FITS headers.
Returns
-------
float
The right ascension value in either decimal degrees or decimal hours
depending on `returndeg`.
'''
if hours > 24:
return None
else:
dec_hours = fabs(hours) + fabs(minutes)/60.0 + fabs(seconds)/3600.0
if returndeg:
dec_deg = dec_hours*15.0
if dec_deg < 0:
dec_deg = dec_deg + 360.0
dec_deg = dec_deg % 360.0
return dec_deg
else:
return dec_hours
|
def dms_to_decimal(sign, degrees, minutes, seconds):
'''Converts from DD:MM:SS to a decimal value.
Parameters
----------
sign : {'+', '-', ''}
The sign part of a Dec coordinate.
degrees : int
The DD part of a Dec coordinate.
minutes : int
The MM part of a Dec coordinate.
seconds : float
The SS.sss part of a Dec coordinate.
Returns
-------
float
The declination value in decimal degrees.
'''
dec_deg = fabs(degrees) + fabs(minutes)/60.0 + fabs(seconds)/3600.0
if sign == '-':
return -dec_deg
else:
return dec_deg
|
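# A quick round-trip through the sexagesimal helpers above; the coordinates
# are arbitrary examples.
def _example_sexagesimal_roundtrip():
    # RA: 161.25 deg -> (10, 45, 0.0) -> back to 161.25 deg
    hh, mm, ss = decimal_to_hms(161.25)
    ra_deg = hms_to_decimal(hh, mm, ss, returndeg=True)
    # Dec: -20.5 deg -> ('-', 20, 30, 0.0) -> back to -20.5 deg
    sign, dd, dmm, dss = decimal_to_dms(-20.5)
    decl_deg = dms_to_decimal(sign, dd, dmm, dss)
    return ra_deg, decl_deg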
def great_circle_dist(ra1, dec1, ra2, dec2):
'''Calculates the great circle angular distance between two coords.
This calculates the great circle angular distance in arcseconds between two
coordinates (ra1,dec1) and (ra2,dec2). This is basically a clone of GCIRC
from the IDL Astrolib.
Parameters
----------
ra1,dec1 : float or array-like
The first coordinate's right ascension and declination value(s) in
decimal degrees.
ra2,dec2 : float or array-like
The second coordinate's right ascension and declination value(s) in
decimal degrees.
Returns
-------
float or array-like
Great circle distance between the two coordinates in arseconds.
Notes
-----
If (`ra1`, `dec1`) is scalar and (`ra2`, `dec2`) is scalar: the result is a
float distance in arcseconds.
If (`ra1`, `dec1`) is scalar and (`ra2`, `dec2`) is array-like: the result
is an np.array with distance in arcseconds between (`ra1`, `dec1`) and each
element of (`ra2`, `dec2`).
If (`ra1`, `dec1`) is array-like and (`ra2`, `dec2`) is scalar: the result
is an np.array with distance in arcseconds between (`ra2`, `dec2`) and each
element of (`ra1`, `dec1`).
If (`ra1`, `dec1`) and (`ra2`, `dec2`) are both array-like: the result is an
np.array with the pair-wise distance in arcseconds between each element of
the two coordinate lists. In this case, if the input array-likes are not the
same length, then excess elements of the longer one will be ignored.
'''
# wrap RA if negative or larger than 360.0 deg
in_ra1 = ra1 % 360.0
in_ra1 = in_ra1 + 360.0*(in_ra1 < 0.0)
in_ra2 = ra2 % 360.0
    in_ra2 = in_ra2 + 360.0*(in_ra2 < 0.0)
# convert to radians
ra1_rad, dec1_rad = np.deg2rad(in_ra1), np.deg2rad(dec1)
ra2_rad, dec2_rad = np.deg2rad(in_ra2), np.deg2rad(dec2)
del_dec2 = (dec2_rad - dec1_rad)/2.0
del_ra2 = (ra2_rad - ra1_rad)/2.0
sin_dist = np.sqrt(np.sin(del_dec2) * np.sin(del_dec2) +
np.cos(dec1_rad) * np.cos(dec2_rad) *
np.sin(del_ra2) * np.sin(del_ra2))
dist_rad = 2.0 * np.arcsin(sin_dist)
# return the distance in arcseconds
return np.rad2deg(dist_rad)*3600.0
|
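# A small worked example for great_circle_dist: two positions at the same RA
# separated by exactly one arcminute in declination should be 60 arcsec apart.
def _example_great_circle_dist():
    dist_arcsec = great_circle_dist(120.0, 10.0, 120.0, 10.0 + 1.0/60.0)
    return dist_arcsec   # ~60.0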
def xmatch_basic(ra1, dec1, ra2, dec2, match_radius=5.0):
'''Finds the closest object in (`ra2`, `dec2`) to scalar coordinate pair
(`ra1`, `dec1`) and returns the distance in arcseconds.
This is a quick matcher that uses the `great_circle_dist` function to find
the closest object in (`ra2`, `dec2`) within `match_radius` arcseconds to
(`ra1`, `dec1`). (`ra1`, `dec1`) must be a scalar pair, while
(`ra2`, `dec2`) must be array-likes of the same lengths.
Parameters
----------
ra1,dec1 : float
Coordinate of the object to find matches to. In decimal degrees.
ra2,dec2 : array-like
The coordinates that will be searched for matches. In decimal degrees.
match_radius : float
The match radius in arcseconds to use for the match.
Returns
-------
tuple
A two element tuple like the following::
            (True -> found a match or False -> no match found,
minimum distance between target and list in arcseconds)
'''
min_dist_arcsec = np.min(great_circle_dist(ra1,dec1,ra2,dec2))
if (min_dist_arcsec < match_radius):
return (True,min_dist_arcsec)
else:
return (False,min_dist_arcsec)
|
def xmatch_neighbors(ra1, dec1,
ra2, dec2,
match_radius=60.0,
includeself=False,
sortresults=True):
'''Finds the closest objects in (`ra2`, `dec2`) to scalar coordinate pair
(`ra1`, `dec1`) and returns the indices of the objects that match.
This is a quick matcher that uses the `great_circle_dist` function to find
the closest object in (`ra2`, `dec2`) within `match_radius` arcseconds to
(`ra1`, `dec1`). (`ra1`, `dec1`) must be a scalar pair, while
(`ra2`, `dec2`) must be array-likes of the same lengths.
Parameters
----------
ra1,dec1 : float
Coordinate of the object to find matches to. In decimal degrees.
ra2,dec2 : array-like
The coordinates that will be searched for matches. In decimal degrees.
match_radius : float
The match radius in arcseconds to use for the match.
includeself : bool
If this is True, the object itself will be included in the match
results.
sortresults : bool
If this is True, the match indices will be sorted by distance.
Returns
-------
tuple
A tuple like the following is returned::
(True -> matches found or False -> no matches found,
minimum distance between target and list,
np.array of indices where list of coordinates is
closer than `match_radius` arcseconds from the target,
np.array of distances in arcseconds)
'''
dist = great_circle_dist(ra1,dec1,ra2,dec2)
if includeself:
match_dist_ind = np.where(dist < match_radius)
else:
# make sure we match only objects that are not the same as this object
match_dist_ind = np.where((dist < match_radius) & (dist > 0.1))
    if match_dist_ind[0].size > 0:
match_dists = dist[match_dist_ind]
dist_sort_ind = np.argsort(match_dists)
if sortresults:
match_dist_ind = (match_dist_ind[0])[dist_sort_ind]
min_dist = np.min(match_dists)
return (True,min_dist,match_dist_ind,match_dists[dist_sort_ind])
else:
return (False,)
|
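# A minimal sketch using xmatch_neighbors to find all catalog entries within
# 30 arcsec of a target position; the coordinate arrays are made up.
def _example_xmatch_neighbors():
    cat_ra = np.array([45.0, 45.001, 45.5, 46.0])
    cat_decl = np.array([-20.0, -20.0005, -20.2, -19.0])
    res = xmatch_neighbors(45.0, -20.0, cat_ra, cat_decl,
                           match_radius=30.0, includeself=True)
    if res[0]:
        _, min_dist, match_inds, match_dists = res
        return match_inds, match_dists
    return None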
def make_kdtree(ra, decl):
'''This makes a `scipy.spatial.CKDTree` on (`ra`, `decl`).
Parameters
----------
ra,decl : array-like
The right ascension and declination coordinate pairs in decimal degrees.
Returns
-------
`scipy.spatial.CKDTree`
The cKDTRee object generated by this function is returned and can be
used to run various spatial queries.
'''
# get the xyz unit vectors from ra,decl
# since i had to remind myself:
# https://en.wikipedia.org/wiki/Equatorial_coordinate_system
cosdecl = np.cos(np.radians(decl))
sindecl = np.sin(np.radians(decl))
cosra = np.cos(np.radians(ra))
sinra = np.sin(np.radians(ra))
xyz = np.column_stack((cosra*cosdecl,sinra*cosdecl, sindecl))
# generate the kdtree
kdt = sps.cKDTree(xyz,copy_data=True)
return kdt
|
def conesearch_kdtree(kdtree,
racenter,
declcenter,
searchradiusdeg,
conesearchworkers=1):
'''This does a cone-search around (`racenter`, `declcenter`) in `kdtree`.
Parameters
----------
kdtree : scipy.spatial.CKDTree
This is a kdtree object generated by the `make_kdtree` function.
racenter,declcenter : float or array-like
This is the center coordinate to run the cone-search around in decimal
degrees. If this is an np.array, will search for all coordinate pairs in
the array.
searchradiusdeg : float
The search radius to use for the cone-search in decimal degrees.
conesearchworkers : int
The number of parallel workers to launch for the cone-search.
Returns
-------
list or np.array of lists
If (`racenter`, `declcenter`) is a single coordinate, this will return a
list of the indices of the matching objects in the kdtree. If
(`racenter`, `declcenter`) are array-likes, this will return an object
array containing lists of matching object indices for each coordinate
searched.
'''
cosdecl = np.cos(np.radians(declcenter))
sindecl = np.sin(np.radians(declcenter))
cosra = np.cos(np.radians(racenter))
sinra = np.sin(np.radians(racenter))
# this is the search distance in xyz unit vectors
xyzdist = 2.0 * np.sin(np.radians(searchradiusdeg)/2.0)
# look up the coordinates
kdtindices = kdtree.query_ball_point([cosra*cosdecl,
sinra*cosdecl,
sindecl],
xyzdist,
n_jobs=conesearchworkers)
return kdtindices
|
def xmatch_kdtree(kdtree,
extra, extdecl,
xmatchdistdeg,
closestonly=True):
'''This cross-matches between `kdtree` and (`extra`, `extdecl`) arrays.
Returns the indices of the kdtree and the indices of extra, extdecl that
xmatch successfully.
Parameters
----------
kdtree : scipy.spatial.CKDTree
This is a kdtree object generated by the `make_kdtree` function.
extra,extdecl : array-like
These are np.arrays of 'external' coordinates in decimal degrees that
will be cross-matched against the objects in `kdtree`.
xmatchdistdeg : float
The match radius to use for the cross-match in decimal degrees.
closestonly : bool
If closestonly is True, then this function returns only the closest
matching indices in (extra, extdecl) for each object in kdtree if there
are any matches. Otherwise, it returns a list of indices in (extra,
extdecl) for all matches within xmatchdistdeg between kdtree and (extra,
extdecl).
Returns
-------
tuple of lists
Returns a tuple of the form::
(list of `kdtree` indices matching to external objects,
list of all `extra`/`extdecl` indices that match to each
element in `kdtree` within the specified cross-match distance)
'''
ext_cosdecl = np.cos(np.radians(extdecl))
ext_sindecl = np.sin(np.radians(extdecl))
ext_cosra = np.cos(np.radians(extra))
ext_sinra = np.sin(np.radians(extra))
ext_xyz = np.column_stack((ext_cosra*ext_cosdecl,
ext_sinra*ext_cosdecl,
ext_sindecl))
ext_xyzdist = 2.0 * np.sin(np.radians(xmatchdistdeg)/2.0)
# get our kdtree
our_kdt = kdtree
# get the external kdtree
ext_kdt = sps.cKDTree(ext_xyz)
# do a query_ball_tree
extkd_matchinds = our_kdt.query_ball_tree(ext_kdt, ext_xyzdist)
ext_matchinds = []
kdt_matchinds = []
for extind, mind in enumerate(extkd_matchinds):
if len(mind) > 0:
# our object indices
kdt_matchinds.append(extind)
# external object indices
if closestonly:
ext_matchinds.append(mind[0])
else:
ext_matchinds.append(mind)
return kdt_matchinds, ext_matchinds
|
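# A sketch tying the kdtree helpers together: build a tree from catalog
# coordinates, run a cone-search around one position, then cross-match an
# external coordinate list against the same tree. All values are arbitrary.
def _example_kdtree_workflow():
    cat_ra = np.array([120.0, 120.01, 121.0, 240.0])
    cat_decl = np.array([-30.0, -30.005, -30.5, 10.0])
    kdt = make_kdtree(cat_ra, cat_decl)
    # indices of catalog objects within 0.1 deg of (120.0, -30.0)
    cone_inds = conesearch_kdtree(kdt, 120.0, -30.0, 0.1)
    # cross-match an external list within 10 arcsec of the catalog objects
    ext_ra = np.array([120.0001, 240.0002])
    ext_decl = np.array([-30.0001, 10.0001])
    kdt_inds, ext_inds = xmatch_kdtree(kdt, ext_ra, ext_decl,
                                       10.0/3600.0, closestonly=True)
    return cone_inds, kdt_inds, ext_inds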
def total_proper_motion(pmra, pmdecl, decl):
'''This calculates the total proper motion of an object.
Parameters
----------
pmra : float or array-like
The proper motion(s) in right ascension, measured in mas/yr.
pmdecl : float or array-like
The proper motion(s) in declination, measured in mas/yr.
decl : float or array-like
The declination of the object(s) in decimal degrees.
Returns
-------
float or array-like
The total proper motion(s) of the object(s) in mas/yr.
'''
pm = np.sqrt( pmdecl*pmdecl + pmra*pmra*np.cos(np.radians(decl)) *
np.cos(np.radians(decl)) )
return pm
|
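# A worked example for total_proper_motion: at decl = 60 deg, a pmra of
# 10 mas/yr contributes 10*cos(60 deg) = 5 mas/yr, so with pmdecl = 0 the
# total proper motion is ~5 mas/yr.
def _example_total_proper_motion():
    return total_proper_motion(10.0, 0.0, 60.0)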
def equatorial_to_galactic(ra, decl, equinox='J2000'):
'''This converts from equatorial coords to galactic coords.
Parameters
----------
ra : float or array-like
Right ascension values(s) in decimal degrees.
decl : float or array-like
Declination value(s) in decimal degrees.
equinox : str
The equinox that the coordinates are measured at. This must be
recognizable by Astropy's `SkyCoord` class.
Returns
-------
tuple of (float, float) or tuple of (np.array, np.array)
The galactic coordinates (l, b) for each element of the input
(`ra`, `decl`).
'''
# convert the ra/decl to gl, gb
radecl = SkyCoord(ra=ra*u.degree, dec=decl*u.degree, equinox=equinox)
gl = radecl.galactic.l.degree
gb = radecl.galactic.b.degree
return gl, gb
|
def galactic_to_equatorial(gl, gb):
'''This converts from galactic coords to equatorial coordinates.
Parameters
----------
gl : float or array-like
Galactic longitude values(s) in decimal degrees.
gb : float or array-like
Galactic latitude value(s) in decimal degrees.
Returns
-------
tuple of (float, float) or tuple of (np.array, np.array)
The equatorial coordinates (RA, DEC) for each element of the input
(`gl`, `gb`) in decimal degrees. These are reported in the ICRS frame.
'''
    gal = SkyCoord(gl*u.degree, gb*u.degree, frame='galactic')
transformed = gal.transform_to('icrs')
return transformed.ra.degree, transformed.dec.degree
|
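# A round-trip sketch between equatorial and galactic coordinates using the
# two helpers above (requires astropy); the input is approximately Vega.
def _example_galactic_roundtrip():
    gl, gb = equatorial_to_galactic(279.23, 38.78)
    ra, decl = galactic_to_equatorial(gl, gb)
    return (gl, gb), (ra, decl)   # ra, decl should be close to the input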
def xieta_from_radecl(inra, indecl,
incenterra, incenterdecl,
deg=True):
'''This returns the image-plane projected xi-eta coords for inra, indecl.
Parameters
----------
inra,indecl : array-like
The equatorial coordinates to get the xi, eta coordinates for in decimal
degrees or radians.
incenterra,incenterdecl : float
The center coordinate values to use to calculate the plane-projected
coordinates around.
deg : bool
If this is True, the input angles are assumed to be in degrees and the
output is in degrees as well.
Returns
-------
tuple of np.arrays
This is the (`xi`, `eta`) coordinate pairs corresponding to the
image-plane projected coordinates for each pair of input equatorial
coordinates in (`inra`, `indecl`).
'''
if deg:
ra = np.radians(inra)
decl = np.radians(indecl)
centerra = np.radians(incenterra)
centerdecl = np.radians(incenterdecl)
else:
ra = inra
decl = indecl
centerra = incenterra
centerdecl = incenterdecl
cdecc = np.cos(centerdecl)
sdecc = np.sin(centerdecl)
crac = np.cos(centerra)
srac = np.sin(centerra)
uu = np.cos(decl)*np.cos(ra)
vv = np.cos(decl)*np.sin(ra)
ww = np.sin(decl)
uun = uu*cdecc*crac + vv*cdecc*srac + ww*sdecc
vvn = -uu*srac + vv*crac
wwn = -uu*sdecc*crac - vv*sdecc*srac + ww*cdecc
denom = vvn*vvn + wwn*wwn
    # arccos is defined only for uun < 1.0; uun >= 1.0 corresponds to the
    # projection center itself, where xi = eta = 0.0
    aunn = np.zeros_like(uun)
    aunn[uun < 1.0] = np.arccos(uun[uun < 1.0])
    xi, eta = np.zeros_like(aunn), np.zeros_like(aunn)
    # assign only where the angular offset and the denominator are both
    # positive; all other elements stay at 0.0
    okind = (aunn > 0.0) & (denom > 0.0)
    xi[okind] = (aunn*vvn)[okind]/np.sqrt(denom[okind])
    eta[okind] = (aunn*wwn)[okind]/np.sqrt(denom[okind])
if deg:
return np.degrees(xi), np.degrees(eta)
else:
return xi, eta
|
def generate_transit_lightcurve(
times,
mags=None,
errs=None,
paramdists={'transitperiod':sps.uniform(loc=0.1,scale=49.9),
'transitdepth':sps.uniform(loc=1.0e-4,scale=2.0e-2),
'transitduration':sps.uniform(loc=0.01,scale=0.29)},
magsarefluxes=False,
):
'''This generates fake planet transit light curves.
Parameters
----------
times : np.array
This is an array of time values that will be used as the time base.
mags,errs : np.array
These arrays will have the model added to them. If either is
        None, `np.full_like(times, 0.0)` will be used as a substitute and the model
light curve will be centered around 0.0.
paramdists : dict
This is a dict containing parameter distributions to use for the
model params, containing the following keys ::
{'transitperiod', 'transitdepth', 'transitduration'}
The values of these keys should all be 'frozen' scipy.stats distribution
objects, e.g.:
https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions
The variability epoch will be automatically chosen from a uniform
distribution between `times.min()` and `times.max()`.
The ingress duration will be automatically chosen from a uniform
distribution ranging from 0.05 to 0.5 of the transitduration.
The transitdepth will be flipped automatically as appropriate if
`magsarefluxes=True`.
magsarefluxes : bool
If the generated time series is meant to be a flux time-series, set this
to True to get the correct sign of variability amplitude.
Returns
-------
dict
A dict of the form below is returned::
{'vartype': 'planet',
'params': {'transitperiod': generated value of period,
'transitepoch': generated value of epoch,
'transitdepth': generated value of transit depth,
'transitduration': generated value of transit duration,
'ingressduration': generated value of transit ingress
duration},
'times': the model times,
'mags': the model mags,
'errs': the model errs,
'varperiod': the generated period of variability == 'transitperiod'
'varamplitude': the generated amplitude of
variability == 'transitdepth'}
'''
if mags is None:
mags = np.full_like(times, 0.0)
if errs is None:
errs = np.full_like(times, 0.0)
# choose the epoch
epoch = npr.random()*(times.max() - times.min()) + times.min()
# choose the period, depth, duration
period = paramdists['transitperiod'].rvs(size=1)
depth = paramdists['transitdepth'].rvs(size=1)
duration = paramdists['transitduration'].rvs(size=1)
# figure out the ingress duration
ingduration = npr.random()*(0.5*duration - 0.05*duration) + 0.05*duration
# fix the transit depth if it needs to be flipped
if magsarefluxes and depth < 0.0:
depth = -depth
elif not magsarefluxes and depth > 0.0:
depth = -depth
# generate the model
modelmags, phase, ptimes, pmags, perrs = (
transits.trapezoid_transit_func([period, epoch, depth,
duration, ingduration],
times,
mags,
errs)
)
# resort in original time order
timeind = np.argsort(ptimes)
mtimes = ptimes[timeind]
mmags = modelmags[timeind]
merrs = perrs[timeind]
# return a dict with everything
modeldict = {
'vartype':'planet',
'params':{x:np.asscalar(y) for x,y in zip(['transitperiod',
'transitepoch',
'transitdepth',
'transitduration',
'ingressduration'],
[period,
epoch,
depth,
duration,
ingduration])},
'times':mtimes,
'mags':mmags,
'errs':merrs,
# these are standard keys that help with later characterization of
        # variability as a function of period, variability amplitude, object mag,
# ndet, etc.
'varperiod':period,
'varamplitude':depth
}
return modeldict
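# Usage sketch (added for illustration, not part of the original module):
# building a custom `paramdists` dict from 'frozen' scipy.stats distributions
# and calling the transit generator on an assumed regularly sampled time base.
# The cadence and parameter ranges below are arbitrary example choices.
def _example_transit_injection():
    import numpy as np
    import scipy.stats as sps
    # a 30-day time base at roughly 30-minute cadence
    times = np.arange(0.0, 30.0, 30.0/1440.0)
    custom_paramdists = {
        'transitperiod': sps.uniform(loc=1.0, scale=9.0),       # 1 to 10 days
        'transitdepth': sps.uniform(loc=5.0e-3, scale=1.5e-2),  # 0.5% to 2%
        'transitduration': sps.uniform(loc=0.02, scale=0.08),   # phase units
    }
    fakelc = generate_transit_lightcurve(times,
                                         paramdists=custom_paramdists,
                                         magsarefluxes=True)
    # fakelc['mags'] holds the transit model centered around 0.0; the drawn
    # parameter values are in fakelc['params']
    return fakelc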
def generate_eb_lightcurve(
times,
mags=None,
errs=None,
paramdists={'period':sps.uniform(loc=0.2,scale=99.8),
'pdepth':sps.uniform(loc=1.0e-4,scale=0.7),
'pduration':sps.uniform(loc=0.01,scale=0.44),
'depthratio':sps.uniform(loc=0.01,scale=0.99),
'secphase':sps.norm(loc=0.5,scale=0.1)},
magsarefluxes=False,
):
'''This generates fake EB light curves.
Parameters
----------
times : np.array
This is an array of time values that will be used as the time base.
mags,errs : np.array
These arrays will have the model added to them. If either is
        None, `np.full_like(times, 0.0)` will be used as a substitute and the model
light curve will be centered around 0.0.
paramdists : dict
This is a dict containing parameter distributions to use for the
model params, containing the following keys ::
{'period', 'pdepth', 'pduration', 'depthratio', 'secphase'}
The values of these keys should all be 'frozen' scipy.stats distribution
objects, e.g.:
https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions
The variability epoch will be automatically chosen from a uniform
distribution between `times.min()` and `times.max()`.
The `pdepth` will be flipped automatically as appropriate if
`magsarefluxes=True`.
magsarefluxes : bool
If the generated time series is meant to be a flux time-series, set this
to True to get the correct sign of variability amplitude.
Returns
-------
dict
A dict of the form below is returned::
{'vartype': 'EB',
'params': {'period': generated value of period,
'epoch': generated value of epoch,
                   'pdepth': generated value of primary eclipse depth,
                   'pduration': generated value of primary eclipse duration,
                   'depthratio': generated value of primary/secondary eclipse
                                 depth ratio},
'times': the model times,
'mags': the model mags,
'errs': the model errs,
'varperiod': the generated period of variability == 'period'
'varamplitude': the generated amplitude of
variability == 'pdepth'}
'''
if mags is None:
mags = np.full_like(times, 0.0)
if errs is None:
errs = np.full_like(times, 0.0)
# choose the epoch
epoch = npr.random()*(times.max() - times.min()) + times.min()
# choose the period, pdepth, duration, depthratio
period = paramdists['period'].rvs(size=1)
pdepth = paramdists['pdepth'].rvs(size=1)
pduration = paramdists['pduration'].rvs(size=1)
depthratio = paramdists['depthratio'].rvs(size=1)
secphase = paramdists['secphase'].rvs(size=1)
# fix the transit depth if it needs to be flipped
if magsarefluxes and pdepth < 0.0:
pdepth = -pdepth
elif not magsarefluxes and pdepth > 0.0:
pdepth = -pdepth
# generate the model
modelmags, phase, ptimes, pmags, perrs = (
eclipses.invgauss_eclipses_func([period, epoch, pdepth,
pduration, depthratio, secphase],
times,
mags,
errs)
)
# resort in original time order
timeind = np.argsort(ptimes)
mtimes = ptimes[timeind]
mmags = modelmags[timeind]
merrs = perrs[timeind]
# return a dict with everything
modeldict = {
'vartype':'EB',
'params':{x:np.asscalar(y) for x,y in zip(['period',
'epoch',
'pdepth',
'pduration',
'depthratio'],
[period,
epoch,
pdepth,
pduration,
depthratio])},
'times':mtimes,
'mags':mmags,
'errs':merrs,
'varperiod':period,
'varamplitude':pdepth,
}
return modeldict
def generate_flare_lightcurve(
times,
mags=None,
errs=None,
paramdists={
# flare peak amplitude from 0.01 mag to 1.0 mag above median. this
# is tuned for redder bands, flares are much stronger in bluer
# bands, so tune appropriately for your situation.
'amplitude':sps.uniform(loc=0.01,scale=0.99),
        # at least 1 and up to 4 flares per LC (npr.randint excludes the high value)
'nflares':[1,5],
# 10 minutes to 1 hour for rise stdev
'risestdev':sps.uniform(loc=0.007, scale=0.04),
# 1 hour to 4 hours for decay time constant
'decayconst':sps.uniform(loc=0.04, scale=0.163)
},
magsarefluxes=False,
):
'''This generates fake flare light curves.
Parameters
----------
times : np.array
This is an array of time values that will be used as the time base.
mags,errs : np.array
These arrays will have the model added to them. If either is
        None, `np.full_like(times, 0.0)` will be used as a substitute and the model
light curve will be centered around 0.0.
paramdists : dict
This is a dict containing parameter distributions to use for the
model params, containing the following keys ::
{'amplitude', 'nflares', 'risestdev', 'decayconst'}
The values of these keys should all be 'frozen' scipy.stats distribution
objects, e.g.:
https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions
The `flare_peak_time` for each flare will be generated automatically
between `times.min()` and `times.max()` using a uniform distribution.
The `amplitude` will be flipped automatically as appropriate if
`magsarefluxes=True`.
magsarefluxes : bool
If the generated time series is meant to be a flux time-series, set this
to True to get the correct sign of variability amplitude.
Returns
-------
dict
A dict of the form below is returned::
{'vartype': 'flare',
'params': {'amplitude': generated value of flare amplitudes,
'nflares': generated value of number of flares,
'risestdev': generated value of stdev of rise time,
'decayconst': generated value of decay constant,
'peaktime': generated value of flare peak time},
'times': the model times,
'mags': the model mags,
'errs': the model errs,
'varamplitude': the generated amplitude of
variability == 'amplitude'}
'''
if mags is None:
mags = np.full_like(times, 0.0)
if errs is None:
errs = np.full_like(times, 0.0)
nflares = npr.randint(paramdists['nflares'][0],
high=paramdists['nflares'][1])
# generate random flare peak times based on the number of flares
flarepeaktimes = (
npr.random(
size=nflares
)*(times.max() - times.min()) + times.min()
)
# now add the flares to the time-series
params = {'nflares':nflares}
for flareind, peaktime in zip(range(nflares), flarepeaktimes):
# choose the amplitude, rise stdev and decay time constant
amp = paramdists['amplitude'].rvs(size=1)
risestdev = paramdists['risestdev'].rvs(size=1)
decayconst = paramdists['decayconst'].rvs(size=1)
# fix the transit depth if it needs to be flipped
if magsarefluxes and amp < 0.0:
amp = -amp
elif not magsarefluxes and amp > 0.0:
amp = -amp
# add this flare to the light curve
modelmags, ptimes, pmags, perrs = (
flares.flare_model(
[amp, peaktime, risestdev, decayconst],
times,
mags,
errs
)
)
# update the mags
mags = modelmags
# add the flare params to the modeldict
params[flareind] = {'peaktime':peaktime,
'amplitude':amp,
'risestdev':risestdev,
'decayconst':decayconst}
#
# done with all flares
#
# return a dict with everything
modeldict = {
'vartype':'flare',
'params':params,
'times':times,
'mags':mags,
'errs':errs,
'varperiod':None,
# FIXME: this is complicated because we can have multiple flares
# figure out a good way to handle this upstream
'varamplitude':[params[x]['amplitude']
for x in range(params['nflares'])],
}
return modeldict
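# Illustrative sketch (added, not part of the original module): reading back
# the per-flare parameters from the dict returned above. The 'params' dict is
# keyed by 'nflares' plus one integer key per generated flare.
def _example_list_flare_params(flare_modeldict):
    perflare = []
    for flareind in range(flare_modeldict['params']['nflares']):
        perflare.append(flare_modeldict['params'][flareind])
    return perflare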
def generate_sinusoidal_lightcurve(
times,
mags=None,
errs=None,
paramdists={
'period':sps.uniform(loc=0.04,scale=500.0),
'fourierorder':[2,10],
'amplitude':sps.uniform(loc=0.1,scale=0.9),
'phioffset':0.0,
},
magsarefluxes=False
):
'''This generates fake sinusoidal light curves.
This can be used for a variety of sinusoidal variables, e.g. RRab, RRc,
Cepheids, Miras, etc. The functions that generate these model LCs below
implement the following table::
## FOURIER PARAMS FOR SINUSOIDAL VARIABLES
#
# type fourier period [days]
# order dist limits dist
# RRab 8 to 10 uniform 0.45--0.80 uniform
# RRc 3 to 6 uniform 0.10--0.40 uniform
# HADS 7 to 9 uniform 0.04--0.10 uniform
# rotator 2 to 5 uniform 0.80--120.0 uniform
# LPV 2 to 5 uniform 250--500.0 uniform
FIXME: for better model LCs, figure out how scipy.signal.butter works and
low-pass filter using scipy.signal.filtfilt.
Parameters
----------
times : np.array
This is an array of time values that will be used as the time base.
mags,errs : np.array
These arrays will have the model added to them. If either is
        None, `np.full_like(times, 0.0)` will be used as a substitute and the model
light curve will be centered around 0.0.
paramdists : dict
This is a dict containing parameter distributions to use for the
model params, containing the following keys ::
{'period', 'fourierorder', 'amplitude', 'phioffset'}
The values of these keys should all be 'frozen' scipy.stats distribution
objects, e.g.:
https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions
The variability epoch will be automatically chosen from a uniform
distribution between `times.min()` and `times.max()`.
The `amplitude` will be flipped automatically as appropriate if
`magsarefluxes=True`.
magsarefluxes : bool
If the generated time series is meant to be a flux time-series, set this
to True to get the correct sign of variability amplitude.
Returns
-------
dict
A dict of the form below is returned::
{'vartype': 'sinusoidal',
'params': {'period': generated value of period,
'epoch': generated value of epoch,
'amplitude': generated value of amplitude,
'fourierorder': generated value of fourier order,
'fourieramps': generated values of fourier amplitudes,
'fourierphases': generated values of fourier phases},
'times': the model times,
'mags': the model mags,
'errs': the model errs,
'varperiod': the generated period of variability == 'period'
'varamplitude': the generated amplitude of
variability == 'amplitude'}
'''
if mags is None:
mags = np.full_like(times, 0.0)
if errs is None:
errs = np.full_like(times, 0.0)
# choose the epoch
epoch = npr.random()*(times.max() - times.min()) + times.min()
# choose the period, fourierorder, and amplitude
period = paramdists['period'].rvs(size=1)
fourierorder = npr.randint(paramdists['fourierorder'][0],
high=paramdists['fourierorder'][1])
amplitude = paramdists['amplitude'].rvs(size=1)
# fix the amplitude if it needs to be flipped
if magsarefluxes and amplitude < 0.0:
amplitude = -amplitude
elif not magsarefluxes and amplitude > 0.0:
amplitude = -amplitude
# generate the amplitudes and phases of the Fourier components
ampcomps = [abs(amplitude/2.0)/float(x)
for x in range(1,fourierorder+1)]
phacomps = [paramdists['phioffset']*float(x)
for x in range(1,fourierorder+1)]
# now that we have our amp and pha components, generate the light curve
modelmags, phase, ptimes, pmags, perrs = sinusoidal.sine_series_sum(
[period, epoch, ampcomps, phacomps],
times,
mags,
errs
)
# resort in original time order
timeind = np.argsort(ptimes)
mtimes = ptimes[timeind]
mmags = modelmags[timeind]
merrs = perrs[timeind]
mphase = phase[timeind]
# return a dict with everything
modeldict = {
'vartype':'sinusoidal',
'params':{x:y for x,y in zip(['period',
'epoch',
'amplitude',
'fourierorder',
'fourieramps',
'fourierphases'],
[period,
epoch,
amplitude,
fourierorder,
ampcomps,
phacomps])},
'times':mtimes,
'mags':mmags,
'errs':merrs,
'phase':mphase,
# these are standard keys that help with later characterization of
        # variability as a function of period, variability amplitude, object mag,
# ndet, etc.
'varperiod':period,
'varamplitude':amplitude
}
return modeldict
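# Illustrative sketch (added, not part of the original module): a simplified
# stand-in showing what the Fourier sine-series model above evaluates to for a
# given period, epoch, and the `ampcomps`/`phacomps` lists built in
# generate_sinusoidal_lightcurve. The actual model comes from
# `sinusoidal.sine_series_sum`, whose conventions may differ slightly.
def _example_sine_series(times, period, epoch, ampcomps, phacomps):
    import numpy as np
    phase = np.remainder(times - epoch, period)/period
    model = np.zeros_like(times, dtype=float)
    for order, (amp, pha) in enumerate(zip(ampcomps, phacomps), start=1):
        model = model + amp*np.sin(2.0*np.pi*order*phase + pha)
    return model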
def generate_rrab_lightcurve(
times,
mags=None,
errs=None,
paramdists={
'period':sps.uniform(loc=0.45,scale=0.35),
'fourierorder':[8,11],
'amplitude':sps.uniform(loc=0.4,scale=0.5),
'phioffset':np.pi,
},
magsarefluxes=False
):
'''This generates fake RRab light curves.
Parameters
----------
times : np.array
This is an array of time values that will be used as the time base.
mags,errs : np.array
These arrays will have the model added to them. If either is
        None, `np.full_like(times, 0.0)` will be used as a substitute and the model
light curve will be centered around 0.0.
paramdists : dict
This is a dict containing parameter distributions to use for the
model params, containing the following keys ::
{'period', 'fourierorder', 'amplitude'}
The values of these keys should all be 'frozen' scipy.stats distribution
objects, e.g.:
https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions
The variability epoch will be automatically chosen from a uniform
distribution between `times.min()` and `times.max()`.
The `amplitude` will be flipped automatically as appropriate if
`magsarefluxes=True`.
magsarefluxes : bool
If the generated time series is meant to be a flux time-series, set this
to True to get the correct sign of variability amplitude.
Returns
-------
dict
A dict of the form below is returned::
{'vartype': 'RRab',
'params': {'period': generated value of period,
'epoch': generated value of epoch,
'amplitude': generated value of amplitude,
'fourierorder': generated value of fourier order,
'fourieramps': generated values of fourier amplitudes,
'fourierphases': generated values of fourier phases},
'times': the model times,
'mags': the model mags,
'errs': the model errs,
'varperiod': the generated period of variability == 'period'
'varamplitude': the generated amplitude of
variability == 'amplitude'}
'''
modeldict = generate_sinusoidal_lightcurve(times,
mags=mags,
errs=errs,
paramdists=paramdists,
magsarefluxes=magsarefluxes)
modeldict['vartype'] = 'RRab'
return modeldict
def make_fakelc(lcfile,
outdir,
magrms=None,
randomizemags=True,
randomizecoords=False,
lcformat='hat-sql',
lcformatdir=None,
timecols=None,
magcols=None,
errcols=None):
'''This preprocesses an input real LC and sets it up to be a fake LC.
Parameters
----------
lcfile : str
This is an input light curve file that will be used to copy over the
time-base. This will be used to generate the time-base for fake light
curves to provide a realistic simulation of the observing window
function.
outdir : str
        The output directory where the fake light curve will be written.
magrms : dict
This is a dict containing the SDSS r mag-RMS (SDSS rmag-MAD preferably)
relation based on all light curves that the input lcfile is from. This
will be used to generate the median mag and noise corresponding to the
magnitude chosen for this fake LC.
randomizemags : bool
If this is True, then a random mag between the first and last magbin in
magrms will be chosen as the median mag for this light curve. This
choice will be weighted by the mag bin probability obtained from the
magrms kwarg. Otherwise, the median mag will be taken from the input
lcfile's lcdict['objectinfo']['sdssr'] key or a transformed SDSS r mag
generated from the input lcfile's lcdict['objectinfo']['jmag'],
['hmag'], and ['kmag'] keys. The magrms relation for each magcol will be
used to generate Gaussian noise at the correct level for the magbin this
light curve's median mag falls into.
randomizecoords : bool
If this is True, will randomize the RA, DEC of the output fake object
and not copy over the RA/DEC from the real input object.
lcformat : str
This is the `formatkey` associated with your input real light curve
format, which you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curve specified in `lcfile`.
lcformatdir : str or None
        If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
timecols : list of str or None
The timecol keys to use from the input lcdict in generating the fake
        light curve. Fake LCs will be generated for each
timecol/magcol/errcol combination in the input light curve.
magcols : list of str or None
The magcol keys to use from the input lcdict in generating the fake
        light curve. Fake LCs will be generated for each
timecol/magcol/errcol combination in the input light curve.
errcols : list of str or None
The errcol keys to use from the input lcdict in generating the fake
        light curve. Fake LCs will be generated for each
timecol/magcol/errcol combination in the input light curve.
Returns
-------
tuple
A tuple of the following form is returned::
(fakelc_fpath,
fakelc_lcdict['columns'],
fakelc_lcdict['objectinfo'],
fakelc_lcdict['moments'])
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(fileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
# override the default timecols, magcols, and errcols
# using the ones provided to the function
if timecols is None:
timecols = dtimecols
if magcols is None:
magcols = dmagcols
if errcols is None:
errcols = derrcols
# read in the light curve
lcdict = readerfunc(lcfile)
if isinstance(lcdict, tuple) and isinstance(lcdict[0],dict):
lcdict = lcdict[0]
# set up the fakelcdict with a randomly assigned objectid
fakeobjectid = sha512(npr.bytes(12)).hexdigest()[-8:]
fakelcdict = {
'objectid':fakeobjectid,
'objectinfo':{'objectid':fakeobjectid},
'columns':[],
'moments':{},
'origformat':lcformat,
}
# now, get the actual mag of this object and other info and use that to
# populate the corresponding entries of the fakelcdict objectinfo
if ('objectinfo' in lcdict and
isinstance(lcdict['objectinfo'], dict)):
objectinfo = lcdict['objectinfo']
# get the RA
if (not randomizecoords and 'ra' in objectinfo and
objectinfo['ra'] is not None and
np.isfinite(objectinfo['ra'])):
fakelcdict['objectinfo']['ra'] = objectinfo['ra']
else:
# if there's no RA available, we'll assign a random one between 0
# and 360.0
LOGWARNING('%s: assigning a random right ascension' % lcfile)
fakelcdict['objectinfo']['ra'] = npr.random()*360.0
# get the DEC
if (not randomizecoords and 'decl' in objectinfo and
objectinfo['decl'] is not None and
np.isfinite(objectinfo['decl'])):
fakelcdict['objectinfo']['decl'] = objectinfo['decl']
else:
# if there's no DECL available, we'll assign a random one between
# -90.0 and +90.0
LOGWARNING(' %s: assigning a random declination' % lcfile)
fakelcdict['objectinfo']['decl'] = npr.random()*180.0 - 90.0
# get the SDSS r mag for this object
# this will be used for getting the eventual mag-RMS relation later
if ((not randomizemags) and 'sdssr' in objectinfo and
objectinfo['sdssr'] is not None and
np.isfinite(objectinfo['sdssr'])):
fakelcdict['objectinfo']['sdssr'] = objectinfo['sdssr']
# if the SDSS r is unavailable, but we have J, H, K: use those to get
# the SDSS r by using transformations
elif ((not randomizemags) and ('jmag' in objectinfo and
objectinfo['jmag'] is not None and
np.isfinite(objectinfo['jmag'])) and
('hmag' in objectinfo and
objectinfo['hmag'] is not None and
np.isfinite(objectinfo['hmag'])) and
('kmag' in objectinfo and
objectinfo['kmag'] is not None and
np.isfinite(objectinfo['kmag']))):
LOGWARNING('used JHK mags to generate an SDSS r mag for %s' %
lcfile)
fakelcdict['objectinfo']['sdssr'] = jhk_to_sdssr(
objectinfo['jmag'],
objectinfo['hmag'],
objectinfo['kmag']
)
        # if we're specifically told to randomize the mags, choose a random
        # mag weighted by the mag-bin probabilities from the mag-RMS relation
elif randomizemags and magrms:
LOGWARNING(' %s: assigning a random mag weighted by mag '
'bin probabilities' % lcfile)
magbins = magrms[magcols[0]]['binned_sdssr_median']
binprobs = magrms[magcols[0]]['magbin_probabilities']
# this is the center of the magbin chosen
magbincenter = npr.choice(magbins,size=1,p=binprobs)
# in this magbin, choose between center and -+ 0.25 mag
chosenmag = (
npr.random()*((magbincenter+0.25) - (magbincenter-0.25)) +
(magbincenter-0.25)
)
fakelcdict['objectinfo']['sdssr'] = np.asscalar(chosenmag)
# if there are no mags available at all, generate a random mag
# between 8 and 16.0
else:
LOGWARNING(' %s: assigning a random mag from '
'uniform distribution between 8.0 and 16.0' % lcfile)
fakelcdict['objectinfo']['sdssr'] = npr.random()*8.0 + 8.0
# if there's no info available, generate fake info
else:
LOGWARNING('no object information found in %s, '
'generating random ra, decl, sdssr' %
lcfile)
fakelcdict['objectinfo']['ra'] = npr.random()*360.0
fakelcdict['objectinfo']['decl'] = npr.random()*180.0 - 90.0
fakelcdict['objectinfo']['sdssr'] = npr.random()*8.0 + 8.0
#
# NOW FILL IN THE TIMES, MAGS, ERRS
#
# get the time columns
for tcind, tcol in enumerate(timecols):
if '.' in tcol:
tcolget = tcol.split('.')
else:
tcolget = [tcol]
if tcol not in fakelcdict:
fakelcdict[tcol] = _dict_get(lcdict, tcolget)
fakelcdict['columns'].append(tcol)
# update the ndet with the first time column's size. it's possible
# that different time columns have different lengths, but that would
# be weird and we won't deal with it for now
if tcind == 0:
fakelcdict['objectinfo']['ndet'] = fakelcdict[tcol].size
# get the mag columns
for mcol in magcols:
if '.' in mcol:
mcolget = mcol.split('.')
else:
mcolget = [mcol]
# put the mcol in only once
if mcol not in fakelcdict:
measuredmags = _dict_get(lcdict, mcolget)
measuredmags = measuredmags[np.isfinite(measuredmags)]
# if we're randomizing, get the mags from the interpolated mag-RMS
# relation
if (randomizemags and magrms and mcol in magrms and
'interpolated_magmad' in magrms[mcol] and
magrms[mcol]['interpolated_magmad'] is not None):
interpfunc = magrms[mcol]['interpolated_magmad']
lcmad = interpfunc(fakelcdict['objectinfo']['sdssr'])
fakelcdict['moments'][mcol] = {
'median': fakelcdict['objectinfo']['sdssr'],
'mad': lcmad
}
# if we're not randomizing, get the median and MAD from the light
# curve itself
else:
# we require at least 10 finite measurements
if measuredmags.size > 9:
measuredmedian = np.median(measuredmags)
measuredmad = np.median(
np.abs(measuredmags - measuredmedian)
)
fakelcdict['moments'][mcol] = {'median':measuredmedian,
'mad':measuredmad}
# if there aren't enough measurements in this LC, try to get the
# median and RMS from the interpolated mag-RMS relation first
else:
if (magrms and mcol in magrms and
'interpolated_magmad' in magrms[mcol] and
magrms[mcol]['interpolated_magmad'] is not None):
LOGWARNING(
'input LC %s does not have enough '
'finite measurements, '
'generating mag moments from '
'fakelc sdssr and the mag-RMS relation' % lcfile
)
interpfunc = magrms[mcol]['interpolated_magmad']
lcmad = interpfunc(fakelcdict['objectinfo']['sdssr'])
fakelcdict['moments'][mcol] = {
'median': fakelcdict['objectinfo']['sdssr'],
'mad': lcmad
}
# if we don't have the mag-RMS relation either, then we
# can't do anything for this light curve, generate a random
# MAD between 5e-4 and 0.1
else:
LOGWARNING(
'input LC %s does not have enough '
'finite measurements and '
'no mag-RMS relation provided '
'assigning a random MAD between 5.0e-4 and 0.1'
% lcfile
)
fakelcdict['moments'][mcol] = {
'median':fakelcdict['objectinfo']['sdssr'],
'mad':npr.random()*(0.1 - 5.0e-4) + 5.0e-4
}
# the magnitude column is set to all zeros initially. this will be
# filled in by the add_fakelc_variability function below
fakelcdict[mcol] = np.full_like(_dict_get(lcdict, mcolget), 0.0)
fakelcdict['columns'].append(mcol)
# get the err columns
for mcol, ecol in zip(magcols, errcols):
if '.' in ecol:
ecolget = ecol.split('.')
else:
ecolget = [ecol]
if ecol not in fakelcdict:
measurederrs = _dict_get(lcdict, ecolget)
measurederrs = measurederrs[np.isfinite(measurederrs)]
# if we're randomizing, get the errs from the interpolated mag-RMS
# relation
if (randomizemags and magrms and mcol in magrms and
'interpolated_magmad' in magrms[mcol] and
magrms[mcol]['interpolated_magmad'] is not None):
interpfunc = magrms[mcol]['interpolated_magmad']
lcmad = interpfunc(fakelcdict['objectinfo']['sdssr'])
# the median of the errs = lcmad
# the mad of the errs is 0.1 x lcmad
fakelcdict['moments'][ecol] = {
'median': lcmad,
'mad': 0.1*lcmad
}
else:
# we require at least 10 finite measurements
# we'll calculate the median and MAD of the errs to use later on
if measurederrs.size > 9:
measuredmedian = np.median(measurederrs)
measuredmad = np.median(
np.abs(measurederrs - measuredmedian)
)
fakelcdict['moments'][ecol] = {'median':measuredmedian,
'mad':measuredmad}
else:
if (magrms and mcol in magrms and
'interpolated_magmad' in magrms[mcol] and
magrms[mcol]['interpolated_magmad'] is not None):
LOGWARNING(
'input LC %s does not have enough '
'finite measurements, '
'generating err moments from '
'the mag-RMS relation' % lcfile
)
interpfunc = magrms[mcol]['interpolated_magmad']
lcmad = interpfunc(fakelcdict['objectinfo']['sdssr'])
fakelcdict['moments'][ecol] = {
'median': lcmad,
'mad': 0.1*lcmad
}
# if we don't have the mag-RMS relation either, then we
# can't do anything for this light curve, generate a random
# MAD between 5e-4 and 0.1
else:
LOGWARNING(
'input LC %s does not have '
'enough finite measurements and '
'no mag-RMS relation provided, '
'generating errs randomly' % lcfile
)
fakelcdict['moments'][ecol] = {
'median':npr.random()*(0.01 - 5.0e-4) + 5.0e-4,
'mad':npr.random()*(0.01 - 5.0e-4) + 5.0e-4
}
# the errors column is set to all zeros initially. this will be
# filled in by the add_fakelc_variability function below.
fakelcdict[ecol] = np.full_like(_dict_get(lcdict, ecolget), 0.0)
fakelcdict['columns'].append(ecol)
# add the timecols, magcols, errcols to the lcdict
fakelcdict['timecols'] = timecols
fakelcdict['magcols'] = magcols
fakelcdict['errcols'] = errcols
# generate an output file name
fakelcfname = '%s-fakelc.pkl' % fakelcdict['objectid']
fakelcfpath = os.path.abspath(os.path.join(outdir, fakelcfname))
# write this out to the output directory
with open(fakelcfpath,'wb') as outfd:
pickle.dump(fakelcdict, outfd, protocol=pickle.HIGHEST_PROTOCOL)
# return the fakelc path, its columns, info, and moments so we can put them
# into a collection DB later on
LOGINFO('real LC %s -> fake LC %s OK' % (lcfile, fakelcfpath))
return (fakelcfpath, fakelcdict['columns'],
fakelcdict['objectinfo'], fakelcdict['moments'])
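# Illustrative sketch (added, not part of the original module): the minimal
# per-magcol structure of the `magrms` dict that make_fakelc uses when
# `randomizemags=True`. The magcol name 'aep_000' and all numbers below are
# placeholders; a real magrms dict is assembled by make_fakelc_collection.
def _example_magrms_dict():
    import numpy as np
    from scipy import interpolate as spi
    magbins = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
    madvals = np.array([0.002, 0.003, 0.005, 0.009, 0.015])
    bincounts = np.array([50.0, 120.0, 400.0, 900.0, 1500.0])
    return {
        'aep_000': {
            'binned_sdssr_median': magbins,
            'binned_lcmad_median': madvals,
            # maps a chosen median mag to the expected LC MAD at that mag
            'interpolated_magmad': spi.interp1d(magbins, madvals,
                                                kind='quadratic',
                                                fill_value='extrapolate'),
            # weights the random choice of the fake object's median mag bin
            'magbin_probabilities': bincounts/np.sum(bincounts),
        }
    }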
def collection_worker(task):
'''
This wraps `process_fakelc` for `make_fakelc_collection` below.
Parameters
----------
task : tuple
This is of the form::
task[0] = lcfile
task[1] = outdir
task[2] = magrms
task[3] = dict with keys: {'lcformat', 'timecols', 'magcols',
'errcols', 'randomizeinfo'}
Returns
-------
tuple
This returns a tuple of the form::
(fakelc_fpath,
fakelc_lcdict['columns'],
fakelc_lcdict['objectinfo'],
fakelc_lcdict['moments'])
'''
lcfile, outdir, kwargs = task
try:
fakelcresults = make_fakelc(
lcfile,
outdir,
**kwargs
)
return fakelcresults
except Exception as e:
LOGEXCEPTION('could not process %s into a fakelc' % lcfile)
return None
def make_fakelc_collection(lclist,
simbasedir,
magrmsfrom,
magrms_interpolate='quadratic',
magrms_fillvalue='extrapolate',
maxlcs=25000,
maxvars=2000,
randomizemags=True,
randomizecoords=False,
vartypes=('EB','RRab','RRc','cepheid',
'rotator','flare','HADS',
'planet','LPV'),
lcformat='hat-sql',
lcformatdir=None,
timecols=None,
magcols=None,
errcols=None):
'''This prepares light curves for the recovery sim.
Collects light curves from `lclist` using a uniform sampling among
them. Copies them to the `simbasedir`, zeroes out their mags and errs but
keeps their time bases, also keeps their RMS and median mags for later
use. Calculates the mag-rms relation for the entire collection and writes
that to the `simbasedir` as well.
The purpose of this function is to copy over the time base and mag-rms
relation of an existing light curve collection to use it as the basis for a
variability recovery simulation.
This returns a pickle written to the `simbasedir` that contains all the
information for the chosen ensemble of fake light curves and writes all
generated light curves to the `simbasedir/lightcurves` directory. Run the
`add_variability_to_fakelc_collection` function after this function to add
variability of the specified type to these generated light curves.
Parameters
----------
lclist : list of str
This is a list of existing project light curves. This can be generated
from :py:func:`astrobase.lcproc.catalogs.make_lclist` or similar.
simbasedir : str
This is the directory to where the fake light curves and their
information will be copied to.
magrmsfrom : str or dict
This is used to generate magnitudes and RMSes for the objects in the
output collection of fake light curves. This arg is either a string
pointing to an existing pickle file that must contain a dict or a dict
variable that MUST have the following key-vals at a minimum::
        {'<magcol1_name>': {
             'binned_sdssr_median': array of median mags for each magbin
             'binned_lcmad_median': array of LC MAD values per magbin
             'binned_count': array of the number of LCs in each magbin
         },
        '<magcol2_name>': {
             'binned_sdssr_median': array of median mags for each magbin
             'binned_lcmad_median': array of LC MAD values per magbin
             'binned_count': array of the number of LCs in each magbin
         },
.
.
...}
where `magcol1_name`, etc. are the same as the `magcols` listed in the
magcols kwarg (or the default magcols for the specified
lcformat). Examples of the magrmsfrom dict (or pickle) required can be
generated by the
:py:func:`astrobase.lcproc.varthreshold.variability_threshold` function.
magrms_interpolate,magrms_fillvalue : str
These are arguments that will be passed directly to the
scipy.interpolate.interp1d function to generate interpolating functions
for the mag-RMS relation. See:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html
for details.
maxlcs : int
This is the total number of light curves to choose from `lclist` and
generate as fake LCs.
    maxvars : int
        This is the total number of fake light curves that will be marked as
        variable.
    randomizemags,randomizecoords : bool
        These are passed through to `make_fakelc` and control whether the
        median mags and the coordinates of the fake objects are randomized
        instead of being copied over from the real input objects.
vartypes : list of str
This is a list of variable types to put into the collection. The
vartypes for each fake variable star will be chosen uniformly from this
list.
lcformat : str
This is the `formatkey` associated with your input real light curves'
format, which you previously passed in to the `lcproc.register_lcformat`
function. This will be used to look up how to find and read the light
curves specified in `lclist`.
lcformatdir : str or None
        If this is provided, gives the path to a directory where you've stored
your lcformat description JSONs, other than the usual directories lcproc
knows to search for them in. Use this along with `lcformat` to specify
an LC format JSON file that's not currently registered with lcproc.
timecols : list of str or None
The timecol keys to use from the input lcdict in generating the fake
        light curve. Fake LCs will be generated for each
timecol/magcol/errcol combination in the input light curves.
magcols : list of str or None
The magcol keys to use from the input lcdict in generating the fake
        light curve. Fake LCs will be generated for each
timecol/magcol/errcol combination in the input light curves.
errcols : list of str or None
The errcol keys to use from the input lcdict in generating the fake
        light curve. Fake LCs will be generated for each
timecol/magcol/errcol combination in the input light curves.
Returns
-------
str
Returns the string file name of a pickle containing all of the
information for the fake LC collection that has been generated.
'''
try:
formatinfo = get_lcformat(lcformat,
use_lcformat_dir=lcformatdir)
if formatinfo:
(fileglob, readerfunc,
dtimecols, dmagcols, derrcols,
magsarefluxes, normfunc) = formatinfo
else:
LOGERROR("can't figure out the light curve format")
return None
except Exception as e:
LOGEXCEPTION("can't figure out the light curve format")
return None
# override the default timecols, magcols, and errcols
# using the ones provided to the function
if timecols is None:
timecols = dtimecols
if magcols is None:
magcols = dmagcols
if errcols is None:
errcols = derrcols
if not isinstance(lclist, np.ndarray):
lclist = np.array(lclist)
chosenlcs = npr.choice(lclist, maxlcs, replace=False)
fakelcdir = os.path.join(simbasedir, 'lightcurves')
if not os.path.exists(fakelcdir):
os.makedirs(fakelcdir)
# get the magrms relation needed from the pickle or input dict
if isinstance(magrmsfrom, str) and os.path.exists(magrmsfrom):
with open(magrmsfrom,'rb') as infd:
xmagrms = pickle.load(infd)
elif isinstance(magrmsfrom, dict):
xmagrms = magrmsfrom
magrms = {}
# get the required items from the magrms dict. interpolate the mag-rms
# relation for the magcol so the make_fake_lc function can use it directly.
for magcol in magcols:
if (magcol in xmagrms and
'binned_sdssr_median' in xmagrms[magcol] and
'binned_lcmad_median' in xmagrms[magcol]):
magrms[magcol] = {
'binned_sdssr_median':np.array(
xmagrms[magcol]['binned_sdssr_median']
),
'binned_lcmad_median':np.array(
xmagrms[magcol]['binned_lcmad_median']
),
}
# interpolate the mag-MAD relation
interpolated_magmad = spi.interp1d(
xmagrms[magcol]['binned_sdssr_median'],
xmagrms[magcol]['binned_lcmad_median'],
kind=magrms_interpolate,
fill_value=magrms_fillvalue,
)
# save the magrms
magrms[magcol]['interpolated_magmad'] = interpolated_magmad
# generate the probability distribution in magbins. this is needed
# to correctly sample the objects in this population
magbins = np.array(xmagrms[magcol]['binned_sdssr_median'])
bincounts = np.array(xmagrms[magcol]['binned_count'])
binprobs = bincounts/np.sum(bincounts)
# save the bin probabilities as well
magrms[magcol]['magbin_probabilities'] = binprobs
else:
LOGWARNING('input magrms dict does not have '
'required info for magcol: %s' % magcol)
magrms[magcol] = {
'binned_sdssr_median':None,
'binned_lcmad_median':None,
'interpolated_magmad':None,
'magbin_probabilities':None,
}
tasks = [(x, fakelcdir, {'lcformat':lcformat,
'timecols':timecols,
'magcols':magcols,
'errcols':errcols,
'magrms':magrms,
'randomizemags':randomizemags,
'randomizecoords':randomizecoords})
for x in chosenlcs]
# we can't parallelize because it messes up the random number generation,
# causing all the IDs to clash. FIXME: figure out a way around this
    # (probably initialize a seed in each worker process?)
fakeresults = [collection_worker(task) for task in tasks]
fakedb = {'simbasedir':simbasedir,
'lcformat':lcformat,
'timecols':timecols,
'magcols':magcols,
'errcols':errcols,
'magsarefluxes':magsarefluxes}
fobjects, fpaths = [], []
fras, fdecls, fndets = [], [], []
fmags, fmagmads = [], []
ferrmeds, ferrmads = [], []
totalvars = 0
# these are the indices for the variable objects chosen randomly
isvariableind = npr.randint(0,high=len(fakeresults), size=maxvars)
    isvariable = np.full(len(fakeresults), False, dtype=bool)
isvariable[isvariableind] = True
fakedb['isvariable'] = isvariable
LOGINFO('added %s variable stars' % maxvars)
# these are the variable types for each variable object
vartypeind = npr.randint(0,high=len(vartypes), size=maxvars)
vartypearr = np.array([vartypes[x] for x in vartypeind])
fakedb['vartype'] = vartypearr
for vt in sorted(vartypes):
LOGINFO('%s: %s stars' % (vt, vartypearr[vartypearr == vt].size))
# now go through the collection and get the mag/rms and err/rms for each
# star. these will be used later to add noise to light curves
LOGINFO('collecting info...')
for fr in fakeresults:
if fr is not None:
fpath, fcols, finfo, fmoments = fr
fobjects.append(finfo['objectid'])
fpaths.append(fpath)
fras.append(finfo['ra'])
fdecls.append(finfo['decl'])
fndets.append(finfo['ndet'])
fmags.append(finfo['sdssr'])
# this is per magcol
fmagmads.append([fmoments[x]['mad'] for x in magcols])
# these are per errcol
ferrmeds.append([fmoments[x]['median'] for x in errcols])
ferrmads.append([fmoments[x]['mad'] for x in errcols])
# convert to nparrays
fobjects = np.array(fobjects)
fpaths = np.array(fpaths)
fras = np.array(fras)
fdecls = np.array(fdecls)
fndets = np.array(fndets)
fmags = np.array(fmags)
fmagmads = np.array(fmagmads)
ferrmeds = np.array(ferrmeds)
ferrmads = np.array(ferrmads)
# put these in the fakedb
fakedb['objectid'] = fobjects
fakedb['lcfpath'] = fpaths
fakedb['ra'] = fras
fakedb['decl'] = fdecls
fakedb['ndet'] = fndets
fakedb['sdssr'] = fmags
fakedb['mad'] = fmagmads
fakedb['errmedian'] = ferrmeds
fakedb['errmad'] = ferrmads
# get the mag-RMS curve for this light curve collection for each magcol
fakedb['magrms'] = magrms
# finally, write the collection DB to a pickle in simbasedir
dboutfname = os.path.join(simbasedir,'fakelcs-info.pkl')
with open(dboutfname, 'wb') as outfd:
pickle.dump(fakedb, outfd)
LOGINFO('wrote %s fake LCs to: %s' % (len(fakeresults), simbasedir))
LOGINFO('fake LC info written to: %s' % dboutfname)
return dboutfname
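# Usage sketch (added, not part of the original module): generating a fake LC
# collection from an existing set of real light curves. All paths, the LC file
# glob, and the var-threshold pickle name below are placeholders; the lcformat
# key must be one registered with lcproc.
def _example_make_collection():
    import glob
    real_lcs = glob.glob('/data/project/lightcurves/*-hatlc.sqlite.gz')
    fakelc_infopkl = make_fakelc_collection(
        real_lcs,
        '/data/project/fakelc-sim',
        '/data/project/varthreshold-magrms.pkl',
        maxlcs=1000,
        maxvars=100,
        vartypes=('EB', 'RRab', 'planet'),
        lcformat='hat-sql',
    )
    # the returned path points to the fakelcs-info.pkl collection DB
    return fakelc_infopkl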
def add_fakelc_variability(fakelcfile,
vartype,
override_paramdists=None,
magsarefluxes=False,
overwrite=False):
'''This adds variability of the specified type to the fake LC.
The procedure is (for each `magcol`):
- read the fakelcfile, get the stored moments and vartype info
    - add the periodic variability specified in `vartype`. If `vartype` is
      None, do nothing in this step. If `override_paramdists` is provided,
      use it instead of the default parameter distributions for the chosen
      `vartype`. NOTE: the provided param dists must make sense for the
      vartype, otherwise weird stuff will happen.
- add the median mag level stored in `fakelcfile` to the time series
- add Gaussian noise to the light curve as specified in `fakelcfile`
- add a varinfo key and dict to the lcdict with `varperiod`, `varepoch`,
`varparams`
- write back to fake LC pickle
- return the `varinfo` dict to the caller
Parameters
----------
fakelcfile : str
The name of the fake LC file to process.
vartype : str
The type of variability to add to this fake LC file.
override_paramdists : dict
A parameter distribution dict as in the `generate_XX_lightcurve`
functions above. If provided, will override the distribution stored in
the input fake LC file itself.
magsarefluxes : bool
Sets if the variability amplitude is in fluxes and not magnitudes.
    overwrite : bool
This overwrites the input fake LC file with a new variable LC even if
it's been processed before.
Returns
-------
dict
A dict of the following form is returned::
{'objectid':lcdict['objectid'],
'lcfname':fakelcfile,
'actual_vartype':vartype,
'actual_varparams':lcdict['actual_varparams']}
'''
# read in the fakelcfile
lcdict = _read_pklc(fakelcfile)
# make sure to bail out if this light curve already has fake variability
# added
if ('actual_vartype' in lcdict and
'actual_varparams' in lcdict and
not overwrite):
LOGERROR('%s has existing variability type: %s '
'and params: %s and overwrite = False, '
'skipping this file...' %
(fakelcfile, lcdict['actual_vartype'],
repr(lcdict['actual_varparams'])))
return None
# get the times, mags, errs from this LC
timecols, magcols, errcols = (lcdict['timecols'],
lcdict['magcols'],
lcdict['errcols'])
# get the correct function to apply variability
if vartype in VARTYPE_LCGEN_MAP:
vargenfunc = VARTYPE_LCGEN_MAP[vartype]
elif vartype is None:
vargenfunc = None
else:
LOGERROR('unknown variability type: %s, choose from: %s' %
(vartype, repr(list(VARTYPE_LCGEN_MAP.keys()))))
return None
# 1. generate the variability, including the overrides if provided we do
# this outside the loop below to get the period, etc. distributions once
# only per object. NOTE: in doing so, we're assuming that the difference
# between magcols is just additive and the timebases for each magcol are the
# same; this is not strictly correct
if vargenfunc is not None:
if (override_paramdists is not None and
isinstance(override_paramdists,dict)):
variablelc = vargenfunc(lcdict[timecols[0]],
paramdists=override_paramdists,
magsarefluxes=magsarefluxes)
else:
variablelc = vargenfunc(lcdict[timecols[0]],
magsarefluxes=magsarefluxes)
# for nonvariables, don't execute vargenfunc, but return a similar dict
# so we can add the required noise to it
else:
variablelc = {'vartype':None,
'params':None,
'times':lcdict[timecols[0]],
'mags':np.full_like(lcdict[timecols[0]], 0.0),
'errs':np.full_like(lcdict[timecols[0]], 0.0)}
# now iterate over the time, mag, err columns
for tcol, mcol, ecol in zip(timecols, magcols, errcols):
times, mags, errs = lcdict[tcol], lcdict[mcol], lcdict[ecol]
# 2. get the moments for this magcol
mag_median = lcdict['moments'][mcol]['median']
mag_mad = lcdict['moments'][mcol]['mad']
        # convert the MAD to an equivalent Gaussian RMS (sigma ~ 1.483 x MAD)
mag_rms = mag_mad*1.483
err_median = lcdict['moments'][ecol]['median']
err_mad = lcdict['moments'][ecol]['mad']
err_rms = err_mad*1.483
# 3. add the median level + gaussian noise
magnoise = npr.normal(size=variablelc['mags'].size)*mag_rms
errnoise = npr.normal(size=variablelc['errs'].size)*err_rms
finalmags = mag_median + (variablelc['mags'] + magnoise)
finalerrs = err_median + (variablelc['errs'] + errnoise)
# 4. update these tcol, mcol, ecol values in the lcdict
lcdict[mcol] = finalmags
lcdict[ecol] = finalerrs
#
# all done with updating mags and errs
#
# 5. update the light curve with the variability info
lcdict['actual_vartype'] = variablelc['vartype']
lcdict['actual_varparams'] = variablelc['params']
# these standard keys are set to help out later with characterizing recovery
# rates by magnitude, period, amplitude, ndet, etc.
if vartype is not None:
lcdict['actual_varperiod'] = variablelc['varperiod']
lcdict['actual_varamplitude'] = variablelc['varamplitude']
else:
lcdict['actual_varperiod'] = np.nan
lcdict['actual_varamplitude'] = np.nan
# 6. write back, making sure to do it safely
tempoutf = '%s.%s' % (fakelcfile, md5(npr.bytes(4)).hexdigest()[-8:])
with open(tempoutf, 'wb') as outfd:
pickle.dump(lcdict, outfd, pickle.HIGHEST_PROTOCOL)
if os.path.exists(tempoutf):
shutil.copy(tempoutf, fakelcfile)
os.remove(tempoutf)
else:
LOGEXCEPTION('could not write output light curve file to dir: %s' %
os.path.dirname(tempoutf))
# fail here
raise
LOGINFO('object: %s, vartype: %s -> %s OK' % (
lcdict['objectid'],
vartype,
fakelcfile)
)
return {'objectid':lcdict['objectid'],
'lcfname':fakelcfile,
'actual_vartype':vartype,
'actual_varparams':lcdict['actual_varparams']}
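# Illustrative sketch (added, not part of the original module): the noise
# model applied in step 3 above, pulled out as a standalone function. The MAD
# stored in the fake LC's moments is converted to a Gaussian sigma
# (sigma ~ 1.483 x MAD) and zero-mean Gaussian noise at that level is added
# on top of the median mag level plus the variability model.
def _example_add_noise_to_model(modelmags, mag_median, mag_mad):
    import numpy.random as npr
    mag_sigma = 1.483*mag_mad
    noise = npr.normal(size=modelmags.size)*mag_sigma
    return mag_median + modelmags + noise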
def add_variability_to_fakelc_collection(simbasedir,
override_paramdists=None,
overwrite_existingvar=False):
'''This adds variability and noise to all fake LCs in `simbasedir`.
    If an object is marked as variable in the `fakelcs-info.pkl` file in
    `simbasedir`, a variable signal will be added to its light curve based on
    its selected type, default period and amplitude distribution, the
    appropriate params, etc. The epoch for each variable object will be chosen
    uniformly from its time-range (and may not necessarily fall on an actual
    observed time). Nonvariable objects will only have noise added as determined
    by their params, but no variable signal will be added.
Parameters
----------
simbasedir : str
The directory containing the fake LCs to process.
override_paramdists : dict
This can be used to override the stored variable parameters in each fake
LC. It should be a dict of the following form::
{'<vartype1>': {'<param1>: a scipy.stats distribution function or
the np.random.randint function,
.
.
.
'<paramN>: a scipy.stats distribution function
or the np.random.randint function}
for any vartype in VARTYPE_LCGEN_MAP. These are used to override the
default parameter distributions for each variable type.
overwrite_existingvar : bool
If this is True, then will overwrite any existing variability in the
input fake LCs in `simbasedir`.
Returns
-------
dict
This returns a dict containing the fake LC filenames as keys and
variability info for each as values.
'''
# open the fakelcs-info.pkl
infof = os.path.join(simbasedir,'fakelcs-info.pkl')
with open(infof, 'rb') as infd:
lcinfo = pickle.load(infd)
lclist = lcinfo['lcfpath']
varflag = lcinfo['isvariable']
vartypes = lcinfo['vartype']
vartind = 0
varinfo = {}
# go through all the LCs and add the required type of variability
for lc, varf, _lcind in zip(lclist, varflag, range(len(lclist))):
# if this object is variable, add variability
if varf:
thisvartype = vartypes[vartind]
if (override_paramdists and
isinstance(override_paramdists, dict) and
thisvartype in override_paramdists and
isinstance(override_paramdists[thisvartype], dict)):
thisoverride_paramdists = override_paramdists[thisvartype]
else:
thisoverride_paramdists = None
varlc = add_fakelc_variability(
lc, thisvartype,
override_paramdists=thisoverride_paramdists,
overwrite=overwrite_existingvar
)
varinfo[varlc['objectid']] = {'params': varlc['actual_varparams'],
'vartype': varlc['actual_vartype']}
# update vartind
vartind = vartind + 1
else:
varlc = add_fakelc_variability(
lc, None,
overwrite=overwrite_existingvar
)
varinfo[varlc['objectid']] = {'params': varlc['actual_varparams'],
'vartype': varlc['actual_vartype']}
#
# done with all objects
#
# write the varinfo back to the dict and fakelcs-info.pkl
lcinfo['varinfo'] = varinfo
tempoutf = '%s.%s' % (infof, md5(npr.bytes(4)).hexdigest()[-8:])
with open(tempoutf, 'wb') as outfd:
pickle.dump(lcinfo, outfd, pickle.HIGHEST_PROTOCOL)
if os.path.exists(tempoutf):
shutil.copy(tempoutf, infof)
os.remove(tempoutf)
else:
LOGEXCEPTION('could not write output light curve file to dir: %s' %
os.path.dirname(tempoutf))
# fail here
raise
return lcinfo
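# Usage sketch (added, not part of the original module): adding variability
# and noise to a previously generated fake LC collection, overriding the
# parameter distributions used for the injected planets (assuming the 'planet'
# vartype maps to generate_transit_lightcurve, as its returned vartype
# suggests). Since the override replaces the generator's whole paramdists dict
# for that vartype, all of the keys that generator expects are provided. The
# simbasedir path is a placeholder.
def _example_add_collection_variability():
    import scipy.stats as sps
    lcinfo = add_variability_to_fakelc_collection(
        '/data/project/fakelc-sim',
        override_paramdists={
            'planet': {
                'transitperiod': sps.uniform(loc=0.5, scale=9.5),
                'transitdepth': sps.uniform(loc=1.0e-3, scale=9.0e-3),
                'transitduration': sps.uniform(loc=0.01, scale=0.09),
            }
        },
        overwrite_existingvar=True,
    )
    return lcinfo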
def add_flare_model(flareparams,
times,
mags,
errs):
'''This adds a flare model function to the input magnitude/flux time-series.
Parameters
----------
flareparams : list of float
This defines the flare model::
[amplitude,
flare_peak_time,
rise_gaussian_stdev,
decay_time_constant]
where:
`amplitude`: the maximum flare amplitude in mags or flux. If flux, then
amplitude should be positive. If mags, amplitude should be negative.
`flare_peak_time`: time at which the flare maximum happens.
`rise_gaussian_stdev`: the stdev of the gaussian describing the rise of
the flare.
`decay_time_constant`: the time constant of the exponential fall of the
flare.
    times,mags,errs : np.array
        The input time-series of measurements and associated errors for which
        the model will be generated. The times will be used to generate the
        model mags. Note that this function takes no `magsarefluxes` kwarg:
        the sign of the `amplitude` in `flareparams` sets the flare direction
        (positive for fluxes, negative for mags).
Returns
-------
dict
A dict of the form below is returned::
{'times': the original times array
'mags': the original mags + the flare model mags evaluated at times,
'errs': the original errs array,
'flareparams': the input list of flare params}
'''
modelmags, ftimes, fmags, ferrs = flares.flare_model(
flareparams,
times,
mags,
errs
)
return {'times':times,
'mags':mags + modelmags,
'errs':errs,
'flareparams':flareparams}
def simple_flare_find(times, mags, errs,
smoothbinsize=97,
flare_minsigma=4.0,
flare_maxcadencediff=1,
flare_mincadencepoints=3,
magsarefluxes=False,
savgol_polyorder=2,
**savgol_kwargs):
'''This finds flares in time series using the method in Walkowicz+ 2011.
FIXME: finish this.
Parameters
----------
times,mags,errs : np.array
The input time-series to find flares in.
smoothbinsize : int
The number of consecutive light curve points to smooth over in the time
series using a Savitsky-Golay filter. The smoothed light curve is then
subtracted from the actual light curve to remove trends that potentially
last `smoothbinsize` light curve points. The default value is chosen as
~6.5 hours (97 x 4 minute cadence for HATNet/HATSouth).
flare_minsigma : float
The minimum sigma above the median LC level to designate points as
belonging to possible flares.
flare_maxcadencediff : int
The maximum number of light curve points apart each possible flare event
measurement is allowed to be. If this is 1, then we'll look for
consecutive measurements.
flare_mincadencepoints : int
The minimum number of light curve points (each `flare_maxcadencediff`
points apart) required that are at least `flare_minsigma` above the
median light curve level to call an event a flare.
magsarefluxes: bool
If True, indicates that mags is actually an array of fluxes.
savgol_polyorder: int
The polynomial order of the function used by the Savitsky-Golay filter.
savgol_kwargs : extra kwargs
Any remaining keyword arguments are passed directly to the
`savgol_filter` function from `scipy.signal`.
Returns
-------
(nflares, flare_indices) : tuple
Returns the total number of flares found and their time-indices (start,
end) as tuples.
'''
# if no errs are given, assume 0.1% errors
if errs is None:
errs = 0.001*mags
# get rid of nans first
finiteind = np.isfinite(times) & np.isfinite(mags) & np.isfinite(errs)
ftimes = times[finiteind]
fmags = mags[finiteind]
ferrs = errs[finiteind]
# now get the smoothed mag series using the filter
# kwargs are provided to the savgol_filter function
smoothed = savgol_filter(fmags,
smoothbinsize,
savgol_polyorder,
**savgol_kwargs)
subtracted = fmags - smoothed
# calculate some stats
# the series_median is ~zero after subtraction
series_mad = np.median(np.abs(subtracted))
series_stdev = 1.483*series_mad
# find extreme positive deviations
if magsarefluxes:
extind = np.where(subtracted > (flare_minsigma*series_stdev))
else:
extind = np.where(subtracted < (-flare_minsigma*series_stdev))
# see if there are any extrema
    if extind[0].size > 0:
extrema_indices = extind[0]
flaregroups = []
# find the deviations within the requested flaremaxcadencediff
for ind, extrema_index in enumerate(extrema_indices):
# FIXME: finish this
pass
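# Illustrative sketch (added, not part of the original module, and not the
# finished implementation the FIXME above refers to): one possible way to
# group the indices of extreme deviations into candidate flare events,
# requiring at least `flare_mincadencepoints` points spaced no more than
# `flare_maxcadencediff` cadence points apart. `extrema_indices` is assumed
# to be a sorted, non-empty integer array like the one found above.
def _example_group_flare_candidates(extrema_indices,
                                    flare_maxcadencediff=1,
                                    flare_mincadencepoints=3):
    groups, current = [], [extrema_indices[0]]
    for ind in extrema_indices[1:]:
        if ind - current[-1] <= flare_maxcadencediff:
            current.append(ind)
        else:
            if len(current) >= flare_mincadencepoints:
                groups.append((current[0], current[-1]))
            current = [ind]
    if len(current) >= flare_mincadencepoints:
        groups.append((current[0], current[-1]))
    # returns (number of candidate flares, list of (start, end) time-indices)
    return len(groups), groups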
def _smooth_acf(acf, windowfwhm=7, windowsize=21):
'''This returns a smoothed version of the ACF.
Convolves the ACF with a Gaussian of given `windowsize` and `windowfwhm`.
Parameters
----------
acf : np.array
The auto-correlation function array to smooth.
    windowfwhm : int
        The width of the smoothing Gaussian kernel. Note that this value is
        passed to `astropy.convolution.Gaussian1DKernel` as the kernel's
        standard deviation.
windowsize : int
The number of input points to apply the smoothing over.
Returns
-------
np.array
Smoothed version of the input ACF array.
'''
convkernel = Gaussian1DKernel(windowfwhm, x_size=windowsize)
smoothed = convolve(acf, convkernel, boundary='extend')
return smoothed
def _smooth_acf_savgol(acf, windowsize=21, polyorder=2):
'''
This returns a smoothed version of the ACF.
This version uses the Savitsky-Golay smoothing filter.
Parameters
----------
acf : np.array
The auto-correlation function array to smooth.
windowsize : int
The number of input points to apply the smoothing over.
polyorder : int
The order of the polynomial to use in the Savitsky-Golay filter.
Returns
-------
np.array
Smoothed version of the input ACF array.
'''
smoothed = savgol_filter(acf, windowsize, polyorder)
return smoothed
def _get_acf_peakheights(lags, acf, npeaks=20, searchinterval=1):
'''This calculates the relative peak heights for first npeaks in ACF.
Usually, the first peak or the second peak (if its peak height > first peak)
corresponds to the correct lag. When we know the correct lag, the period is
then::
bestperiod = time[lags == bestlag] - time[0]
Parameters
----------
lags : np.array
An array of lags that the ACF is calculated at.
acf : np.array
The array containing the ACF values.
npeaks : int
        The maximum number of peaks to consider when finding peak heights.
searchinterval : int
From `scipy.signal.argrelmax`: "How many points on each side to use for
the comparison to consider comparator(n, n+x) to be True." This
        effectively sets how many points on each side of the current peak will be
used to check if the current peak is the local maximum.
Returns
-------
dict
This returns a dict of the following form::
{'maxinds':the indices of the lag array where maxes are,
'maxacfs':the ACF values at each max,
'maxlags':the lag values at each max,
'mininds':the indices of the lag array where mins are,
'minacfs':the ACF values at each min,
'minlags':the lag values at each min,
'relpeakheights':the relative peak heights of each rel. ACF peak,
'relpeaklags':the lags at each rel. ACF peak found,
'peakindices':the indices of arrays where each rel. ACF peak is,
'bestlag':the lag value with the largest rel. ACF peak height,
'bestpeakheight':the largest rel. ACF peak height,
'bestpeakindex':the largest rel. ACF peak's number in all peaks}
'''
maxinds = argrelmax(acf, order=searchinterval)[0]
maxacfs = acf[maxinds]
maxlags = lags[maxinds]
mininds = argrelmin(acf, order=searchinterval)[0]
minacfs = acf[mininds]
minlags = lags[mininds]
relpeakheights = npzeros(npeaks)
relpeaklags = npzeros(npeaks,dtype=npint64)
peakindices = npzeros(npeaks,dtype=npint64)
for peakind, mxi in enumerate(maxinds[:npeaks]):
# check if there are no mins to the left
# throw away this peak because it's probably spurious
# (FIXME: is this OK?)
if npall(mxi < mininds):
continue
leftminind = mininds[mininds < mxi][-1] # the last index to the left
rightminind = mininds[mininds > mxi][0] # the first index to the right
relpeakheights[peakind] = (
acf[mxi] - (acf[leftminind] + acf[rightminind])/2.0
)
relpeaklags[peakind] = lags[mxi]
peakindices[peakind] = peakind
# figure out the bestperiod if possible
if relpeakheights[0] > relpeakheights[1]:
bestlag = relpeaklags[0]
bestpeakheight = relpeakheights[0]
bestpeakindex = peakindices[0]
else:
bestlag = relpeaklags[1]
bestpeakheight = relpeakheights[1]
bestpeakindex = peakindices[1]
return {'maxinds':maxinds,
'maxacfs':maxacfs,
'maxlags':maxlags,
'mininds':mininds,
'minacfs':minacfs,
'minlags':minlags,
'relpeakheights':relpeakheights,
'relpeaklags':relpeaklags,
'peakindices':peakindices,
'bestlag':bestlag,
'bestpeakheight':bestpeakheight,
'bestpeakindex':bestpeakindex}
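# Illustrative sketch (added, not part of the original module): turning the
# best lag found by _get_acf_peakheights into a period estimate. This assumes
# the ACF was computed on a light curve resampled to a uniform `cadence` (in
# the same units as the desired period), so that lag k corresponds to a time
# shift of k*cadence; the ACF period-finder below refines this with a fit.
def _example_period_from_acf(lags, smoothedacf, cadence):
    peakres = _get_acf_peakheights(lags, smoothedacf, npeaks=10)
    return peakres['bestlag']*cadence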
def plot_acf_results(acfp, outfile, maxlags=5000, yrange=(-0.4,0.4)):
'''
This plots the unsmoothed/smoothed ACF vs lag.
Parameters
----------
acfp : dict
This is the dict returned from `macf_period_find` below.
outfile : str
The output file the plot will be written to.
maxlags: int
The maximum number of lags to include in the plot.
yrange : sequence of two floats
The y-range of the ACF vs. lag plot to use.
'''
import matplotlib.pyplot as plt
lags = acfp['acfresults']['lags'][:maxlags]
smoothedacf = acfp['acf'][:maxlags]
unsmoothedacf = acfp['acfresults']['acf'][:maxlags]
    acfparams = (acfp['kwargs']['smoothfunckwargs'] or {}).copy()
acfparams.update({'peakinterval': int(acfp['kwargs']['smoothacf']/2.0)})
# plot the ACFs
fig, ax1 = plt.subplots()
# this is lags vs acf
ax1.plot(lags, unsmoothedacf, label='unsmoothed ACF',color='#1f77b4')
ax1.plot(lags, smoothedacf, label='smoothed ACF', color='#ff7f0e')
ax1.set_xlim((0,maxlags))
ax1.set_xlabel('lags')
# overplot the identified peaks
acfmaxinds = acfp['acfpeaks']['maxinds']
for i, maxind in enumerate(acfmaxinds):
if i == 0:
ax1.axvline(maxind,
linewidth=2.0,
color='red',
ymin=0.2, ymax=0.3,
label='identified ACF peaks')
else:
ax1.axvline(maxind,
linewidth=2.0,
color='red',
ymin=0.2, ymax=0.3)
plt.ylabel('ACF')
plt.ylim(yrange)
ax1.legend()
plt.title('%s' % repr(acfparams))
plt.tight_layout()
plt.savefig(outfile)
plt.close('all')
return outfile
def macf_period_find(
times,
mags,
errs,
fillgaps=0.0,
filterwindow=11,
forcetimebin=None,
maxlags=None,
maxacfpeaks=10,
smoothacf=21, # set for Kepler-type LCs, see details below
smoothfunc=_smooth_acf_savgol,
smoothfunckwargs=None,
magsarefluxes=False,
sigclip=3.0,
verbose=True,
periodepsilon=0.1, # doesn't do anything, for consistent external API
nworkers=None, # doesn't do anything, for consistent external API
startp=None, # doesn't do anything, for consistent external API
endp=None, # doesn't do anything, for consistent external API
autofreq=None, # doesn't do anything, for consistent external API
stepsize=None, # doesn't do anything, for consistent external API
):
'''This finds periods using the McQuillan+ (2013a, 2014) ACF method.
The kwargs from `periodepsilon` to `stepsize` don't do anything but are used
to present a consistent API for all periodbase period-finders to an outside
driver (e.g. the one in the checkplotserver).
Parameters
----------
times,mags,errs : np.array
The input magnitude/flux time-series to run the period-finding for.
fillgaps : 'noiselevel' or float
This sets what to use to fill in gaps in the time series. If this is
'noiselevel', will smooth the light curve using a point window size of
`filterwindow` (this should be an odd integer), subtract the smoothed LC
from the actual LC and estimate the RMS. This RMS will be used to fill
        in the gaps. Other useful values here are 0.0 and np.nan.
filterwindow : int
The light curve's smoothing filter window size to use if
        `fillgaps='noiselevel'`.
forcetimebin : None or float
This is used to force a particular cadence in the light curve other than
the automatically determined cadence. This effectively rebins the light
curve to this cadence. This should be in the same time units as `times`.
maxlags : None or int
This is the maximum number of lags to calculate. If None, will calculate
all lags.
maxacfpeaks : int
This is the maximum number of ACF peaks to use when finding the highest
peak and obtaining a fit period.
smoothacf : int
This is the number of points to use as the window size when smoothing
the ACF with the `smoothfunc`. This should be an odd integer value. If
this is None, will not smooth the ACF, but this will probably lead to
finding spurious peaks in a generally noisy ACF.
For Kepler, a value between 21 and 51 seems to work fine. For ground
based data, much larger values may be necessary: between 1001 and 2001
seem to work best for the HAT surveys. This is dependent on cadence, RMS
of the light curve, the periods of the objects you're looking for, and
finally, any correlated noise in the light curve. Make a plot of the
smoothed/unsmoothed ACF vs. lag using the result dict of this function
and the `plot_acf_results` function above to see the identified ACF
peaks and what kind of smoothing might be needed.
The value of `smoothacf` will also be used to figure out the interval to
use when searching for local peaks in the ACF: this interval is 1/2 of
the `smoothacf` value.
smoothfunc : Python function
This is the function that will be used to smooth the ACF. This should
take at least one kwarg: 'windowsize'. Other kwargs can be passed in
        using a dict provided in `smoothfunckwargs`. By default, this uses a
        Savitzky-Golay filter; a Gaussian filter is also provided but not used
        by default. Another good option would be an actual low-pass filter
        (generated using `scipy.signal`) to remove all high-frequency noise
        from the ACF.
smoothfunckwargs : dict or None
The dict of optional kwargs to pass in to the `smoothfunc`.
magsarefluxes : bool
If your input measurements in `mags` are actually fluxes instead of
        mags, set this to True.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
verbose : bool
If True, will indicate progress and report errors.
Returns
-------
dict
Returns a dict with results. dict['bestperiod'] is the estimated best
period and dict['fitperiodrms'] is its estimated error. Other
interesting things in the output include:
- dict['acfresults']: all results from calculating the ACF. in
particular, the unsmoothed ACF might be of interest:
dict['acfresults']['acf'] and dict['acfresults']['lags'].
- dict['lags'] and dict['acf'] contain the ACF after smoothing was
applied.
- dict['periods'] and dict['lspvals'] can be used to construct a
pseudo-periodogram.
- dict['naivebestperiod'] is obtained by multiplying the lag at the
highest ACF peak with the cadence. This is usually close to the fit
period (dict['fitbestperiod']), which is calculated by doing a fit to
the lags vs. peak index relation as in McQuillan+ 2014.
'''
# get the ACF
acfres = autocorr_magseries(
times,
mags,
errs,
maxlags=maxlags,
fillgaps=fillgaps,
forcetimebin=forcetimebin,
sigclip=sigclip,
magsarefluxes=magsarefluxes,
filterwindow=filterwindow,
verbose=verbose
)
xlags = acfres['lags']
# smooth the ACF if requested
if smoothacf and isinstance(smoothacf, int) and smoothacf > 0:
if smoothfunckwargs is None:
sfkwargs = {'windowsize':smoothacf}
else:
sfkwargs = smoothfunckwargs.copy()
sfkwargs.update({'windowsize':smoothacf})
        xacf = smoothfunc(acfres['acf'], **sfkwargs)
        # the local-peak search interval is half the smoothing window
        peaksearchinterval = int(smoothacf/2)
    else:
        xacf = acfres['acf']
        sfkwargs = smoothfunckwargs
        # with no smoothing, fall back to checking adjacent points only
        peaksearchinterval = 1
    # get the relative peak heights and fit best lag
    peakres = _get_acf_peakheights(xlags, xacf, npeaks=maxacfpeaks,
                                   searchinterval=peaksearchinterval)
# this is the best period's best ACF peak height
bestlspval = peakres['bestpeakheight']
try:
        # get the fit best lag from a linear fit to the peak index vs time(peak
        # lag) function as in McQuillan+ (2014)
fity = npconcatenate((
[0.0, peakres['bestlag']],
peakres['relpeaklags'][peakres['relpeaklags'] > peakres['bestlag']]
))
fity = fity*acfres['cadence']
fitx = nparange(fity.size)
fitcoeffs, fitcovar = nppolyfit(fitx, fity, 1, cov=True)
# fit best period is the gradient of fit
fitbestperiod = fitcoeffs[0]
bestperiodrms = npsqrt(fitcovar[0,0]) # from the covariance matrix
    except Exception as e:
        LOGWARNING('linear fit to time at each peak lag '
                   'value vs. peak number failed, '
                   'naively calculated ACF period may not be accurate')
        fitcoeffs = nparray([npnan, npnan])
        fitcovar = nparray([[npnan, npnan], [npnan, npnan]])
        fitbestperiod = npnan
        bestperiodrms = npnan
        # fall through to the naive period calculation below instead of
        # re-raising, so the nan fallback values can actually be used
# calculate the naive best period using delta_tau = lag * cadence
naivebestperiod = peakres['bestlag']*acfres['cadence']
if fitbestperiod < naivebestperiod:
LOGWARNING('fit bestperiod = %.5f may be an alias, '
'naively calculated bestperiod is = %.5f' %
(fitbestperiod, naivebestperiod))
if npisfinite(fitbestperiod):
bestperiod = fitbestperiod
else:
bestperiod = naivebestperiod
return {'bestperiod':bestperiod,
'bestlspval':bestlspval,
'nbestpeaks':maxacfpeaks,
# for compliance with the common pfmethod API
'nbestperiods':npconcatenate([
[fitbestperiod],
peakres['relpeaklags'][1:maxacfpeaks]*acfres['cadence']
]),
'nbestlspvals':peakres['maxacfs'][:maxacfpeaks],
'lspvals':xacf,
'periods':xlags*acfres['cadence'],
'acf':xacf,
'lags':xlags,
'method':'acf',
'naivebestperiod':naivebestperiod,
'fitbestperiod':fitbestperiod,
'fitperiodrms':bestperiodrms,
'periodfitcoeffs':fitcoeffs,
'periodfitcovar':fitcovar,
'kwargs':{'maxlags':maxlags,
'maxacfpeaks':maxacfpeaks,
'fillgaps':fillgaps,
'filterwindow':filterwindow,
'smoothacf':smoothacf,
'smoothfunckwargs':sfkwargs,
'magsarefluxes':magsarefluxes,
'sigclip':sigclip},
'acfresults':acfres,
'acfpeaks':peakres}
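# Illustrative usage sketch (not part of the original source): the dict
# returned by `macf_period_find` can be treated like any other period-finder
# result. `times`, `mags`, `errs` are assumed to be 1-D numpy arrays from a
# Kepler-like light curve (hence `smoothacf=21`).
def _example_acf_period(times, mags, errs):
    acfp = macf_period_find(times, mags, errs,
                            fillgaps=0.0,
                            smoothacf=21,
                            maxacfpeaks=10,
                            magsarefluxes=True)
    # best period from the linear fit to the peak lags, with its RMS error
    print(acfp['fitbestperiod'], acfp['fitperiodrms'])
    # periods vs. ACF values can serve as a pseudo-periodogram
    return acfp['periods'], acfp['lspvals']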
|
def _autocorr_func1(mags, lag, maglen, magmed, magstd):
'''Calculates the autocorr of mag series for specific lag.
This version of the function is taken from: Kim et al. (`2011
<https://dx.doi.org/10.1088/0004-637X/735/2/68>`_)
Parameters
----------
mags : np.array
This is the magnitudes array. MUST NOT have any nans.
lag : float
The specific lag value to calculate the auto-correlation for. This MUST
be less than total number of observations in `mags`.
maglen : int
The number of elements in the `mags` array.
magmed : float
The median of the `mags` array.
magstd : float
The standard deviation of the `mags` array.
Returns
-------
float
The auto-correlation at this specific `lag` value.
'''
lagindex = nparange(1,maglen-lag)
products = (mags[lagindex] - magmed) * (mags[lagindex+lag] - magmed)
acorr = (1.0/((maglen - lag)*magstd)) * npsum(products)
return acorr
|
def _autocorr_func2(mags, lag, maglen, magmed, magstd):
'''
This is an alternative function to calculate the autocorrelation.
This version is from (first definition):
https://en.wikipedia.org/wiki/Correlogram#Estimation_of_autocorrelations
Parameters
----------
mags : np.array
This is the magnitudes array. MUST NOT have any nans.
lag : float
The specific lag value to calculate the auto-correlation for. This MUST
be less than total number of observations in `mags`.
maglen : int
The number of elements in the `mags` array.
magmed : float
The median of the `mags` array.
magstd : float
The standard deviation of the `mags` array.
Returns
-------
float
The auto-correlation at this specific `lag` value.
'''
lagindex = nparange(0,maglen-lag)
products = (mags[lagindex] - magmed) * (mags[lagindex+lag] - magmed)
autocovarfunc = npsum(products)/lagindex.size
varfunc = npsum(
(mags[lagindex] - magmed)*(mags[lagindex] - magmed)
)/mags.size
acorr = autocovarfunc/varfunc
return acorr
|
def _autocorr_func3(mags, lag, maglen, magmed, magstd):
'''
This is yet another alternative to calculate the autocorrelation.
Taken from: `Bayesian Methods for Hackers by Cameron Pilon <http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Chapter3.ipynb#Autocorrelation>`_
(This should be the fastest method to calculate ACFs.)
Parameters
----------
mags : np.array
This is the magnitudes array. MUST NOT have any nans.
lag : float
The specific lag value to calculate the auto-correlation for. This MUST
be less than total number of observations in `mags`.
maglen : int
The number of elements in the `mags` array.
magmed : float
The median of the `mags` array.
magstd : float
The standard deviation of the `mags` array.
Returns
-------
    np.array
        The full auto-correlation function for all lags at once (this version
        ignores the `lag` argument and computes everything in one shot).
'''
# from http://tinyurl.com/afz57c4
result = npcorrelate(mags, mags, mode='full')
result = result / npmax(result)
return result[int(result.size / 2):]
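# Minimal sketch (illustrative, not from the original source) of what
# `_autocorr_func3` computes: the full normalized ACF in one shot via
# `np.correlate`. For a pure sinusoid sampled at uniform cadence, the first
# strong non-zero-lag peak lands near lag = period / cadence.
def _example_full_acf():
    import numpy as np
    cadence = 0.02                        # days per sample (assumed)
    t = np.arange(1000) * cadence
    m = np.sin(2.0 * np.pi * t / 1.5)     # 1.5 day period -> lag of ~75 samples
    acf = _autocorr_func3(m, 0, m.size, np.median(m), np.std(m))
    peaklag = np.argmax(acf[10:200]) + 10
    return peaklag * cadence              # ~1.5 days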
|
def autocorr_magseries(times, mags, errs,
maxlags=1000,
func=_autocorr_func3,
fillgaps=0.0,
filterwindow=11,
forcetimebin=None,
sigclip=3.0,
magsarefluxes=False,
verbose=True):
'''This calculates the ACF of a light curve.
This will pre-process the light curve to fill in all the gaps and normalize
everything to zero. If `fillgaps = 'noiselevel'`, fills the gaps with the
noise level obtained via the procedure above. If `fillgaps = 'nan'`, fills
the gaps with `np.nan`.
Parameters
----------
times,mags,errs : np.array
The measurement time-series and associated errors.
maxlags : int
The maximum number of lags to calculate.
func : Python function
        This is the function used to calculate the auto-correlation at each lag.
fillgaps : 'noiselevel' or float
This sets what to use to fill in gaps in the time series. If this is
'noiselevel', will smooth the light curve using a point window size of
`filterwindow` (this should be an odd integer), subtract the smoothed LC
from the actual LC and estimate the RMS. This RMS will be used to fill
        in the gaps. Other useful values here are 0.0 and np.nan.
filterwindow : int
The light curve's smoothing filter window size to use if
        `fillgaps='noiselevel'`.
forcetimebin : None or float
This is used to force a particular cadence in the light curve other than
the automatically determined cadence. This effectively rebins the light
curve to this cadence. This should be in the same time units as `times`.
sigclip : float or int or sequence of two floats/ints or None
If a single float or int, a symmetric sigma-clip will be performed using
the number provided as the sigma-multiplier to cut out from the input
time-series.
If a list of two ints/floats is provided, the function will perform an
'asymmetric' sigma-clip. The first element in this list is the sigma
value to use for fainter flux/mag values; the second element in this
list is the sigma value to use for brighter flux/mag values. For
example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma
dimmings and greater than 3-sigma brightenings. Here the meaning of
"dimming" and "brightening" is set by *physics* (not the magnitude
system), which is why the `magsarefluxes` kwarg must be correctly set.
If `sigclip` is None, no sigma-clipping will be performed, and the
time-series (with non-finite elems removed) will be passed through to
the output.
magsarefluxes : bool
If your input measurements in `mags` are actually fluxes instead of
        mags, set this to True.
verbose : bool
If True, will indicate progress and report errors.
Returns
-------
dict
A dict of the following form is returned::
{'itimes': the interpolated time values after gap-filling,
'imags': the interpolated mag/flux values after gap-filling,
        'ierrs': the interpolated err values after gap-filling,
'cadence': the cadence of the output mag/flux time-series,
'minitime': the minimum value of the interpolated times array,
'lags': the lags used to calculate the auto-correlation function,
'acf': the value of the ACF at each lag used}
'''
# get the gap-filled timeseries
interpolated = fill_magseries_gaps(times, mags, errs,
fillgaps=fillgaps,
forcetimebin=forcetimebin,
sigclip=sigclip,
magsarefluxes=magsarefluxes,
filterwindow=filterwindow,
verbose=verbose)
    if not interpolated:
        LOGERROR('failed to interpolate light curve to minimum cadence!')
        return None
    itimes, imags = interpolated['itimes'], interpolated['imags']
# calculate the lags up to maxlags
if maxlags:
lags = nparange(0, maxlags)
else:
lags = nparange(itimes.size)
series_stdev = 1.483*npmedian(npabs(imags))
if func != _autocorr_func3:
# get the autocorrelation as a function of the lag of the mag series
autocorr = nparray([func(imags, x, imags.size, 0.0, series_stdev)
for x in lags])
# this doesn't need a lags array
else:
autocorr = _autocorr_func3(imags, lags[0], imags.size,
0.0, series_stdev)
# return only the maximum number of lags
if maxlags is not None:
autocorr = autocorr[:maxlags]
interpolated.update({'minitime':itimes.min(),
'lags':lags,
'acf':autocorr})
return interpolated
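# Illustrative usage sketch (not part of the original source): converting the
# lag axis of the ACF into time units using the cadence that
# `autocorr_magseries` determines during gap-filling.
def _example_acf_timescale(times, mags, errs):
    acfres = autocorr_magseries(times, mags, errs,
                                maxlags=3000,
                                fillgaps=0.0,
                                sigclip=3.0)
    if acfres is None:
        return None
    lag_times = acfres['lags'] * acfres['cadence']   # lags in time units
    return lag_times, acfres['acf']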
|
def aovhm_theta(times, mags, errs, frequency,
nharmonics, magvariance):
'''This calculates the harmonic AoV theta statistic for a frequency.
This is a mostly faithful translation of the inner loop in `aovper.f90`. See
the following for details:
- http://users.camk.edu.pl/alex/
- Schwarzenberg-Czerny (`1996
<http://iopscience.iop.org/article/10.1086/309985/meta>`_)
Schwarzenberg-Czerny (1996) equation 11::
theta_prefactor = (K - 2N - 1)/(2N)
theta_top = sum(c_n*c_n) (from n=0 to n=2N)
theta_bot = variance(timeseries) - sum(c_n*c_n) (from n=0 to n=2N)
theta = theta_prefactor * (theta_top/theta_bot)
N = number of harmonics (nharmonics)
K = length of time series (times.size)
Parameters
----------
times,mags,errs : np.array
        The input time-series to calculate the test statistic for. These should
        all be free of nans/infs and be normalized to zero.
frequency : float
The test frequency to calculate the statistic for.
nharmonics : int
        The number of harmonics to calculate up to. The recommended range is
        4 to 8.
magvariance : float
This is the (weighted by errors) variance of the magnitude time
series. We provide it as a pre-calculated value here so we don't have to
re-calculate it for every worker.
Returns
-------
aov_harmonic_theta : float
        The value of the harmonic AoV theta for the specified test `frequency`.
'''
period = 1.0/frequency
ndet = times.size
two_nharmonics = nharmonics + nharmonics
# phase with test period
phasedseries = phase_magseries_with_errs(
times, mags, errs, period, times[0],
sort=True, wrap=False
)
# get the phased quantities
phase = phasedseries['phase']
pmags = phasedseries['mags']
perrs = phasedseries['errs']
# this is sqrt(1.0/errs^2) -> the weights
pweights = 1.0/perrs
# multiply by 2.0*PI (for omega*time)
phase = phase * 2.0 * pi_value
# this is the z complex vector
z = npcos(phase) + 1.0j*npsin(phase)
# multiply phase with N
phase = nharmonics * phase
# this is the psi complex vector
psi = pmags * pweights * (npcos(phase) + 1j*npsin(phase))
# this is the initial value of z^n
zn = 1.0 + 0.0j
# this is the initial value of phi
phi = pweights + 0.0j
# initialize theta to zero
theta_aov = 0.0
# go through all the harmonics now up to 2N
for _ in range(two_nharmonics):
# this is <phi, phi>
phi_dot_phi = npsum(phi * phi.conjugate())
# this is the alpha_n numerator
alpha = npsum(pweights * z * phi)
# this is <phi, psi>. make sure to use npvdot and NOT npdot to get
# complex conjugate of first vector as expected for complex vectors
phi_dot_psi = npvdot(phi, psi)
# make sure phi_dot_phi is not zero
phi_dot_phi = npmax([phi_dot_phi, 10.0e-9])
# this is the expression for alpha_n
alpha = alpha / phi_dot_phi
# update theta_aov for this harmonic
theta_aov = (theta_aov +
npabs(phi_dot_psi) * npabs(phi_dot_psi) / phi_dot_phi)
# use the recurrence relation to find the next phi
phi = phi * z - alpha * zn * phi.conjugate()
# update z^n
zn = zn * z
# done with all harmonics, calculate the theta_aov for this freq
# the max below makes sure that magvariance - theta_aov > zero
theta_aov = ( (ndet - two_nharmonics - 1.0) * theta_aov /
(two_nharmonics * npmax([magvariance - theta_aov,
1.0e-9])) )
return theta_aov
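# Illustrative sketch (not part of the original source): building a simple
# harmonic-AoV periodogram by evaluating `aovhm_theta` over a frequency grid.
# For simplicity this uses the plain variance of the zero-centered mags; the
# library itself pre-computes an error-weighted variance.
def _example_aovhm_periodogram(times, mags, errs,
                               startp=0.5, endp=10.0,
                               nfreq=2000, nharmonics=6):
    import numpy as np
    nmags = mags - np.median(mags)            # normalize to zero
    magvariance = np.var(nmags)               # simple (unweighted) variance
    freqs = np.linspace(1.0/endp, 1.0/startp, nfreq)
    thetas = np.array([aovhm_theta(times, nmags, errs, f,
                                   nharmonics, magvariance)
                       for f in freqs])
    return 1.0/freqs, thetas                  # periods and theta statistic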
|
def _aovhm_theta_worker(task):
'''
This is a parallel worker for the function below.
Parameters
----------
    task : tuple
This is of the form below::
task[0] = times
task[1] = mags
task[2] = errs
task[3] = frequency
task[4] = nharmonics
task[5] = magvariance
Returns
-------
harmonic_aov_theta : float
The value of the harmonic AoV statistic for the test frequency used.
If something goes wrong with the calculation, nan is returned.
'''
times, mags, errs, frequency, nharmonics, magvariance = task
try:
theta = aovhm_theta(times, mags, errs, frequency,
nharmonics, magvariance)
return theta
except Exception as e:
return npnan
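# Illustrative sketch (not part of the original source): how the worker above
# could be driven with a multiprocessing Pool, one task per test frequency.
def _example_parallel_aovhm(times, mags, errs, freqs,
                            nharmonics, magvariance, nworkers=4):
    import numpy as np
    import multiprocessing as mp
    tasks = [(times, mags, errs, f, nharmonics, magvariance) for f in freqs]
    with mp.Pool(nworkers) as pool:
        thetas = pool.map(_aovhm_theta_worker, tasks)
    return np.array(thetas)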
|
def open(self, database, user, password, host):
'''This opens a new database connection.
Parameters
----------
database : str
Name of the database to connect to.
user : str
User name of the database server user.
password : str
Password for the database server user.
host : str
Database hostname or IP address to connect to.
'''
try:
self.connection = pg.connect(user=user,
password=password,
database=database,
host=host)
LOGINFO('postgres connection successfully '
'created, using DB %s, user %s' % (database,
user))
self.database = database
self.user = user
except Exception as e:
LOGEXCEPTION('postgres connection failed, '
'using DB %s, user %s' % (database,
user))
self.database = None
self.user = None
|
def open_default(self):
'''
This opens the database connection using the default database parameters
given in the ~/.astrobase/astrobase.conf file.
'''
if HAVECONF:
self.open(DBDATA, DBUSER, DBPASS, DBHOST)
else:
LOGERROR("no default DB connection config found in lcdb.conf, "
"this function won't work otherwise")
|
def autocommit(self):
'''
This sets the database connection to autocommit. Must be called before
any cursors have been instantiated.
'''
if len(self.cursors.keys()) == 0:
self.connection.autocommit = True
else:
raise AttributeError('database cursors are already active, '
'cannot switch to autocommit now')
|
def cursor(self, handle, dictcursor=False):
'''This gets or creates a DB cursor for the current DB connection.
Parameters
----------
handle : str
The name of the cursor to look up in the existing list or if it
doesn't exist, the name to be used for a new cursor to be returned.
dictcursor : bool
If True, returns a cursor where each returned row can be addressed
as a dictionary by column name.
Returns
-------
psycopg2.Cursor instance
'''
if handle in self.cursors:
return self.cursors[handle]
else:
if dictcursor:
self.cursors[handle] = self.connection.cursor(
cursor_factory=psycopg2.extras.DictCursor
)
else:
self.cursors[handle] = self.connection.cursor()
return self.cursors[handle]
|
def newcursor(self, dictcursor=False):
'''
This creates a DB cursor for the current DB connection using a
randomly generated handle. Returns a tuple with cursor and handle.
Parameters
----------
dictcursor : bool
If True, returns a cursor where each returned row can be addressed
as a dictionary by column name.
Returns
-------
tuple
The tuple is of the form (handle, psycopg2.Cursor instance).
'''
handle = hashlib.sha256(os.urandom(12)).hexdigest()
if dictcursor:
self.cursors[handle] = self.connection.cursor(
cursor_factory=psycopg2.extras.DictCursor
)
else:
self.cursors[handle] = self.connection.cursor()
return (self.cursors[handle], handle)
|
def commit(self):
'''
This just calls the connection's commit method.
'''
if not self.connection.closed:
self.connection.commit()
else:
raise AttributeError('postgres connection to %s is closed' %
self.database)
|
def rollback(self):
'''
    This just calls the connection's rollback method.
'''
if not self.connection.closed:
self.connection.rollback()
else:
raise AttributeError('postgres connection to %s is closed' %
self.database)
|
def close_cursor(self, handle):
'''
Closes the cursor specified and removes it from the `self.cursors`
dictionary.
'''
if handle in self.cursors:
self.cursors[handle].close()
else:
raise KeyError('cursor with handle %s was not found' % handle)
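# Illustrative sketch (not part of the original source): the methods above
# belong to a postgres connection wrapper class, called `LCDB` here purely as
# an assumption for illustration. A typical round-trip might look like this:
def _example_db_roundtrip():
    db = LCDB()                       # hypothetical wrapper class name
    db.open_default()                 # credentials from ~/.astrobase/astrobase.conf
    cur = db.cursor('main')           # get or create a named cursor
    cur.execute('select 1')
    row = cur.fetchone()
    db.commit()
    db.close_cursor('main')
    return row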
|
def trapezoid_transit_func(transitparams, times, mags, errs,
get_ntransitpoints=False):
    '''This generates a trapezoid transit-shaped model light curve.
Suitable for first order modeling of transit signals.
Parameters
----------
transitparams : list of float
This contains the transiting planet trapezoid model::
transitparams = [transitperiod (time),
transitepoch (time),
transitdepth (flux or mags),
transitduration (phase),
ingressduration (phase)]
All of these will then have fitted values after the fit is done.
- for magnitudes -> `transitdepth` should be < 0
- for fluxes -> `transitdepth` should be > 0
times,mags,errs : np.array
The input time-series of measurements and associated errors for which
the transit model will be generated. The times will be used to generate
model mags, and the input `times`, `mags`, and `errs` will be resorted
by model phase and returned.
Returns
-------
(modelmags, phase, ptimes, pmags, perrs) : tuple
Returns the model mags and phase values. Also returns the input `times`,
`mags`, and `errs` sorted by the model's phase.
'''
(transitperiod,
transitepoch,
transitdepth,
transitduration,
ingressduration) = transitparams
# generate the phases
iphase = (times - transitepoch)/transitperiod
iphase = iphase - np.floor(iphase)
phasesortind = np.argsort(iphase)
phase = iphase[phasesortind]
ptimes = times[phasesortind]
pmags = mags[phasesortind]
perrs = errs[phasesortind]
zerolevel = np.median(pmags)
modelmags = np.full_like(phase, zerolevel)
halftransitduration = transitduration/2.0
bottomlevel = zerolevel - transitdepth
slope = transitdepth/ingressduration
# the four contact points of the eclipse
firstcontact = 1.0 - halftransitduration
secondcontact = firstcontact + ingressduration
thirdcontact = halftransitduration - ingressduration
fourthcontact = halftransitduration
## the phase indices ##
# during ingress
ingressind = (phase > firstcontact) & (phase < secondcontact)
# at transit bottom
bottomind = (phase > secondcontact) | (phase < thirdcontact)
# during egress
egressind = (phase > thirdcontact) & (phase < fourthcontact)
# count the number of points in transit
in_transit_points = ingressind | bottomind | egressind
n_transit_points = np.sum(in_transit_points)
# set the mags
modelmags[ingressind] = zerolevel - slope*(phase[ingressind] - firstcontact)
modelmags[bottomind] = bottomlevel
modelmags[egressind] = bottomlevel + slope*(phase[egressind] - thirdcontact)
if get_ntransitpoints:
return modelmags, phase, ptimes, pmags, perrs, n_transit_points
else:
return modelmags, phase, ptimes, pmags, perrs
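# Illustrative sketch (not part of the original source): generating a
# trapezoid transit model for a magnitude time-series. Note the sign
# convention: `transitdepth` < 0 for magnitudes, > 0 for fluxes. The example
# parameter values are arbitrary.
def _example_trapezoid_model(times, mags, errs):
    transitparams = [3.5,                 # period (same time units as `times`)
                     times.min() + 1.2,   # epoch
                     -0.01,               # depth in mag (negative for mags)
                     0.05,                # total duration in phase
                     0.01]                # ingress duration in phase
    modelmags, phase, ptimes, pmags, perrs = trapezoid_transit_func(
        transitparams, times, mags, errs
    )
    return phase, modelmags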
|
def trapezoid_transit_residual(transitparams, times, mags, errs):
'''
This returns the residual between the modelmags and the actual mags.
Parameters
----------
transitparams : list of float
This contains the transiting planet trapezoid model::
transitparams = [transitperiod (time),
transitepoch (time),
transitdepth (flux or mags),
transitduration (phase),
ingressduration (phase)]
All of these will then have fitted values after the fit is done.
- for magnitudes -> `transitdepth` should be < 0
- for fluxes -> `transitdepth` should be > 0
times,mags,errs : np.array
The input time-series of measurements and associated errors for which
the transit model will be generated. The times will be used to generate
model mags, and the input `times`, `mags`, and `errs` will be resorted
by model phase and returned.
Returns
-------
np.array
The residuals between the input `mags` and generated `modelmags`,
weighted by the measurement errors in `errs`.
'''
modelmags, phase, ptimes, pmags, perrs = (
trapezoid_transit_func(transitparams, times, mags, errs)
)
# this is now a weighted residual taking into account the measurement err
return (pmags - modelmags)/perrs
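# Illustrative sketch (not part of the original source): the residual function
# above is shaped for scipy's least-squares drivers, e.g. `scipy.optimize.leastsq`,
# which minimizes the sum of squared weighted residuals over the five transit
# parameters.
def _example_trapezoid_fit(times, mags, errs, initialparams):
    # initialparams = [period, epoch, depth, duration, ingressduration]
    from scipy.optimize import leastsq
    fitparams, covmatrix, infodict, mesg, ier = leastsq(
        trapezoid_transit_residual,
        initialparams,
        args=(times, mags, errs),
        full_output=True
    )
    return fitparams, ier   # ier in 1..4 indicates a successful fit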
|
def tap_query(querystr,
simbad_mirror='simbad',
returnformat='csv',
forcefetch=False,
cachedir='~/.astrobase/simbad-cache',
verbose=True,
timeout=10.0,
refresh=2.0,
maxtimeout=90.0,
maxtries=3,
complete_query_later=False,
jitter=5.0):
'''This queries the SIMBAD TAP service using the ADQL query string provided.
Parameters
----------
querystr : str
This is the ADQL query string. See:
http://www.ivoa.net/documents/ADQL/2.0 for the specification.
simbad_mirror : str
This is the key used to select a SIMBAD mirror from the
`SIMBAD_URLS` dict above. If set, the specified mirror will be used. If
None, a random mirror chosen from that dict will be used.
returnformat : {'csv','votable','json'}
        The returned file format to request from the SIMBAD TAP service.
forcefetch : bool
If this is True, the query will be retried even if cached results for
it exist.
cachedir : str
This points to the directory where results will be downloaded.
verbose : bool
If True, will indicate progress and warn of any issues.
timeout : float
This sets the amount of time in seconds to wait for the service to
respond to our initial request.
refresh : float
This sets the amount of time in seconds to wait before checking if the
result file is available. If the results file isn't available after
`refresh` seconds have elapsed, the function will wait for `refresh`
seconds continuously, until `maxtimeout` is reached or the results file
becomes available.
maxtimeout : float
The maximum amount of time in seconds to wait for a result to become
available after submitting our query request.
maxtries : int
The maximum number of tries (across all mirrors tried) to make to either
submit the request or download the results, before giving up.
complete_query_later : bool
If set to True, a submitted query that does not return a result before
`maxtimeout` has passed will be cancelled but its input request
parameters and the result URL provided by the service will be saved. If
this function is then called later with these same input request
parameters, it will check if the query finally finished and a result is
available. If so, will download the results instead of submitting a new
query. If it's not done yet, will start waiting for results again. To
force launch a new query with the same request parameters, set the
`forcefetch` kwarg to True.
jitter : float
This is used to control the scale of the random wait in seconds before
starting the query. Useful in parallelized situations.
Returns
-------
dict
This returns a dict of the following form::
{'params':dict of the input params used for the query,
'provenance':'cache' or 'new download',
'result':path to the file on disk with the downloaded data table}
'''
# get the default params
inputparams = TAP_PARAMS.copy()
# update them with our input params
inputparams['QUERY'] = querystr[::]
if returnformat in RETURN_FORMATS:
inputparams['FORMAT'] = returnformat
else:
LOGWARNING('unknown result format: %s requested, using CSV' %
returnformat)
inputparams['FORMAT'] = 'csv'
# see if the cachedir exists
if '~' in cachedir:
cachedir = os.path.expanduser(cachedir)
if not os.path.exists(cachedir):
os.makedirs(cachedir)
# generate the cachefname and look for it
xcachekey = '-'.join([repr(inputparams[x])
for x in sorted(inputparams.keys())])
cachekey = hashlib.sha256(xcachekey.encode()).hexdigest()
cachefname = os.path.join(
cachedir,
'%s.%s' % (cachekey, RETURN_FORMATS[returnformat])
)
provenance = 'cache'
incomplete_qpklf = os.path.join(
cachedir,
'incomplete-query-%s' % cachekey
)
##########################################
## COMPLETE A QUERY THAT MAY BE RUNNING ##
##########################################
# first, check if this query can be resurrected
if (not forcefetch and
complete_query_later and
os.path.exists(incomplete_qpklf)):
with open(incomplete_qpklf, 'rb') as infd:
incomplete_qinfo = pickle.load(infd)
LOGWARNING('complete_query_later = True, and '
'this query was not completed on a '
'previous run, will check if it is done now...')
# get the status URL and go into a loop to see if the query completed
waitdone = False
timeelapsed = 0.0
simbad_mirror = incomplete_qinfo['simbad_mirror']
status_url = incomplete_qinfo['status_url']
phasekeyword = incomplete_qinfo['phase_keyword']
resultkeyword = incomplete_qinfo['result_keyword']
while not waitdone:
if timeelapsed > maxtimeout:
                LOGERROR('SIMBAD TAP query still not done '
                         'after waiting %s seconds for results.\n'
                         'request was: %s\n'
                         'status URL is: %s' %
                         (maxtimeout,
                          repr(inputparams),
                          status_url))
return None
try:
resreq = requests.get(status_url,
timeout=timeout)
resreq.raise_for_status()
# parse the response XML and get the job status
resxml = parseString(resreq.text)
jobstatuselem = (
resxml.getElementsByTagName(phasekeyword)[0]
)
jobstatus = jobstatuselem.firstChild.toxml()
if jobstatus == 'COMPLETED':
if verbose:
LOGINFO('SIMBAD query completed, '
'retrieving results...')
waitdone = True
# if we're not done yet, then wait some more
elif jobstatus != 'ERROR':
if verbose:
LOGINFO('elapsed time: %.1f, '
'current status: %s, '
'status URL: %s, waiting...'
% (timeelapsed, jobstatus, status_url))
time.sleep(refresh)
timeelapsed = timeelapsed + refresh
# if the JOB failed, then bail out immediately
else:
LOGERROR('SIMBAD TAP query failed due to a server error.\n'
'status URL: %s\n'
'status contents: %s' %
(status_url,
resreq.text))
# since this job failed, remove the incomplete query pickle
# so we can try this from scratch
os.remove(incomplete_qpklf)
return None
except requests.exceptions.Timeout as e:
LOGEXCEPTION(
                    'SIMBAD query timed out while waiting for status.\n'
'query: %s\n'
'status URL: %s' %
(repr(inputparams), status_url)
)
return None
except Exception as e:
LOGEXCEPTION(
'SIMBAD query failed while waiting for status\n'
'query: %s\n'
'status URL: %s\n'
'status contents: %s' %
(repr(inputparams),
status_url,
resreq.text)
)
# if the query fails completely, then either the status URL
# doesn't exist any more or something else went wrong. we'll
# remove the incomplete query pickle so we can try this from
# scratch
os.remove(incomplete_qpklf)
return None
#
# at this point, we should be ready to get the query results
#
LOGINFO('query completed, retrieving results...')
result_url_elem = resxml.getElementsByTagName(resultkeyword)[0]
result_url = result_url_elem.getAttribute('xlink:href')
result_nrows = result_url_elem.getAttribute('rows')
try:
resreq = requests.get(result_url, timeout=timeout)
resreq.raise_for_status()
if cachefname.endswith('.gz'):
with gzip.open(cachefname,'wb') as outfd:
for chunk in resreq.iter_content(chunk_size=65536):
outfd.write(chunk)
else:
with open(cachefname,'wb') as outfd:
for chunk in resreq.iter_content(chunk_size=65536):
outfd.write(chunk)
if verbose:
LOGINFO('done. rows in result: %s' % result_nrows)
tablefname = cachefname
provenance = 'cache'
# return a dict pointing to the result file
# we'll parse this later
resdict = {'params':inputparams,
'provenance':provenance,
'result':tablefname}
# all went well, so we'll remove the incomplete query pickle
os.remove(incomplete_qpklf)
return resdict
except requests.exceptions.Timeout as e:
LOGEXCEPTION(
'SIMBAD query timed out while trying to '
'download results.\n'
'query: %s\n'
'result URL: %s' %
(repr(inputparams), result_url)
)
return None
except Exception as e:
LOGEXCEPTION(
'SIMBAD query failed because of an error '
'while trying to download results.\n'
'query: %s\n'
'result URL: %s\n'
'response status code: %s' %
(repr(inputparams),
result_url,
resreq.status_code)
)
# if the result download fails, then either the result URL doesn't
# exist any more or something else went wrong. we'll remove the
# incomplete query pickle so we can try this from scratch
os.remove(incomplete_qpklf)
return None
#####################
## RUN A NEW QUERY ##
#####################
# otherwise, we check the cache if it's done already, or run it again if not
if forcefetch or (not os.path.exists(cachefname)):
provenance = 'new download'
time.sleep(random.randint(1,jitter))
# generate a jobid here and update the input params
jobid = 'ab-simbad-%i' % time.time()
inputparams['JOBNAME'] = jobid
inputparams['JOBDESCRIPTION'] = 'astrobase-simbad-tap-ADQL-query'
try:
waitdone = False
timeelapsed = 0.0
# set the simbad mirror to use
if simbad_mirror is not None and simbad_mirror in SIMBAD_URLS:
tapurl = SIMBAD_URLS[simbad_mirror]['url']
resultkeyword = SIMBAD_URLS[simbad_mirror]['resultkeyword']
phasekeyword = SIMBAD_URLS[simbad_mirror]['phasekeyword']
randkey = simbad_mirror
# sub in a table name if this is left unresolved in the input
# query
if '{table}' in querystr:
inputparams['QUERY'] = (
querystr.format(
table=SIMBAD_URLS[simbad_mirror]['table']
)
)
else:
randkey = random.choice(list(SIMBAD_URLS.keys()))
tapurl = SIMBAD_URLS[randkey]['url']
resultkeyword = SIMBAD_URLS[randkey]['resultkeyword']
phasekeyword = SIMBAD_URLS[randkey]['phasekeyword']
# sub in a table name if this is left unresolved in the input
# query
if '{table}' in querystr:
inputparams['QUERY'] = (
querystr.format(
table=SIMBAD_URLS[randkey]['table']
)
)
if verbose:
LOGINFO('using SIMBAD mirror TAP URL: %s' % tapurl)
# send the query and get status
if verbose:
LOGINFO(
'submitting SIMBAD TAP query request for input params: %s'
% repr(inputparams)
)
# here, we'll make sure the SIMBAD mirror works before doing
# anything else
mirrorok = False
ntries = 1
while (not mirrorok):
if ntries > maxtries:
LOGERROR('maximum number of allowed SIMBAD query '
'submission tries (%s) reached, bailing out...' %
maxtries)
return None
try:
req = requests.post(tapurl,
data=inputparams,
timeout=timeout)
resp_status = req.status_code
req.raise_for_status()
mirrorok = True
# this handles immediate 503s
except requests.exceptions.HTTPError as e:
LOGWARNING(
'SIMBAD TAP server: %s not responding, '
'trying another mirror...'
% tapurl
)
mirrorok = False
# for now, we have only one SIMBAD mirror to hit, so we'll
# wait a random time between 1 and 5 seconds to hit it again
remainingmirrors = list(SIMBAD_URLS.keys())
waittime = random.choice(range(1,6))
time.sleep(waittime)
randkey = remainingmirrors[0]
tapurl = SIMBAD_URLS[randkey]['url']
resultkeyword = SIMBAD_URLS[randkey]['resultkeyword']
phasekeyword = SIMBAD_URLS[randkey]['phasekeyword']
if '{table}' in querystr:
inputparams['QUERY'] = (
querystr.format(
table=SIMBAD_URLS[randkey]['table']
)
)
# this handles initial query submission timeouts
except requests.exceptions.Timeout as e:
LOGWARNING(
'SIMBAD TAP query submission timed out, '
'mirror is probably down. Trying another mirror...'
)
mirrorok = False
# for now, we have only one SIMBAD mirror to hit, so we'll
# wait a random time between 1 and 5 seconds to hit it again
remainingmirrors = list(SIMBAD_URLS.keys())
waittime = random.choice(range(1,6))
time.sleep(waittime)
randkey = remainingmirrors[0]
tapurl = SIMBAD_URLS[randkey]['url']
resultkeyword = SIMBAD_URLS[randkey]['resultkeyword']
phasekeyword = SIMBAD_URLS[randkey]['phasekeyword']
if '{table}' in querystr:
inputparams['QUERY'] = (
querystr.format(
table=SIMBAD_URLS[randkey]['table']
)
)
# update the number of submission tries
ntries = ntries + 1
# NOTE: python-requests follows the "303 See Other" redirect
# automatically, so we get the XML status doc immediately. We don't
# need to look up the location of it in the initial response's
# header as in the SIMBAD example.
status_url = req.url
# parse the response XML and get the job status
resxml = parseString(req.text)
jobstatuselem = resxml.getElementsByTagName(phasekeyword)
if jobstatuselem:
jobstatuselem = jobstatuselem[0]
else:
LOGERROR('could not parse job phase using '
'keyword %s in result XML' % phasekeyword)
            LOGERROR('%s' % req.text)
req.close()
return None
jobstatus = jobstatuselem.firstChild.toxml()
# if the job completed already, jump down to retrieving results
if jobstatus == 'COMPLETED':
if verbose:
LOGINFO('SIMBAD query completed, '
'retrieving results...')
waitdone = True
elif jobstatus == 'ERROR':
if verbose:
LOGERROR(
'SIMBAD query failed immediately '
'(probably an ADQL error): %s, '
'status URL: %s, status contents: %s' %
(repr(inputparams),
status_url,
req.text)
)
return None
# we wait for the job to complete if it's not done already
else:
if verbose:
LOGINFO(
'request submitted successfully, '
'current status is: %s. '
'waiting for results...' % jobstatus
)
while not waitdone:
if timeelapsed > maxtimeout:
LOGERROR('SIMBAD TAP query timed out '
'after waiting %s seconds for results.\n'
'request was: %s\n'
'status URL is: %s\n'
'last status was: %s' %
(maxtimeout,
repr(inputparams),
status_url,
jobstatus))
# here, we'll check if we're allowed to sleep on a query
# for a bit and return to it later if the last status
# was QUEUED or EXECUTING
if complete_query_later and jobstatus in ('EXECUTING',
'QUEUED'):
# write a pickle with the query params that we can
# pick up later to finish this query
incomplete_qpklf = os.path.join(
cachedir,
'incomplete-query-%s' % cachekey
)
with open(incomplete_qpklf, 'wb') as outfd:
savedict = inputparams.copy()
savedict['status_url'] = status_url
savedict['last_status'] = jobstatus
savedict['simbad_mirror'] = simbad_mirror
savedict['phase_keyword'] = phasekeyword
savedict['result_keyword'] = resultkeyword
pickle.dump(savedict,
outfd,
pickle.HIGHEST_PROTOCOL)
LOGINFO('complete_query_later = True, '
'last state of query was: %s, '
'will resume later if this function '
'is called again with the same query' %
jobstatus)
return None
time.sleep(refresh)
timeelapsed = timeelapsed + refresh
try:
resreq = requests.get(status_url, timeout=timeout)
resreq.raise_for_status()
# parse the response XML and get the job status
resxml = parseString(resreq.text)
jobstatuselem = (
resxml.getElementsByTagName(phasekeyword)[0]
)
jobstatus = jobstatuselem.firstChild.toxml()
if jobstatus == 'COMPLETED':
if verbose:
LOGINFO('SIMBAD query completed, '
'retrieving results...')
waitdone = True
else:
if verbose:
LOGINFO('elapsed time: %.1f, '
'current status: %s, '
'status URL: %s, waiting...'
% (timeelapsed, jobstatus, status_url))
continue
except requests.exceptions.Timeout as e:
LOGEXCEPTION(
                        'SIMBAD query timed out while waiting for results.\n'
'query: %s\n'
'status URL: %s' %
(repr(inputparams), status_url)
)
return None
except Exception as e:
LOGEXCEPTION(
'SIMBAD query failed while waiting for results\n'
'query: %s\n'
'status URL: %s\n'
'status contents: %s' %
(repr(inputparams),
status_url,
resreq.text)
)
return None
#
# at this point, we should be ready to get the query results
#
result_url_elem = resxml.getElementsByTagName(resultkeyword)[0]
result_url = result_url_elem.getAttribute('xlink:href')
result_nrows = result_url_elem.getAttribute('rows')
try:
resreq = requests.get(result_url, timeout=timeout)
resreq.raise_for_status()
if cachefname.endswith('.gz'):
with gzip.open(cachefname,'wb') as outfd:
for chunk in resreq.iter_content(chunk_size=65536):
outfd.write(chunk)
else:
with open(cachefname,'wb') as outfd:
for chunk in resreq.iter_content(chunk_size=65536):
outfd.write(chunk)
if verbose:
LOGINFO('done. rows in result: %s' % result_nrows)
tablefname = cachefname
except requests.exceptions.Timeout as e:
LOGEXCEPTION(
'SIMBAD query timed out while trying to '
'download results.\n'
'query: %s\n'
'result URL: %s' %
(repr(inputparams), result_url)
)
return None
except Exception as e:
LOGEXCEPTION(
'SIMBAD query failed because of an error '
'while trying to download results.\n'
'query: %s\n'
'result URL: %s\n'
'response status code: %s' %
(repr(inputparams),
result_url,
resreq.status_code)
)
return None
except requests.exceptions.HTTPError as e:
LOGEXCEPTION('SIMBAD TAP query failed.\nrequest status was: '
'%s.\nquery was: %s' % (resp_status,
repr(inputparams)))
return None
except requests.exceptions.Timeout as e:
LOGERROR('SIMBAD TAP query submission timed out, '
'site is probably down. Request was: '
'%s' % repr(inputparams))
return None
except Exception as e:
LOGEXCEPTION('SIMBAD TAP query request failed for '
'%s' % repr(inputparams))
if 'resxml' in locals():
LOGERROR('HTTP response from service:\n%s' % req.text)
return None
############################
## GET RESULTS FROM CACHE ##
############################
else:
if verbose:
LOGINFO('getting cached SIMBAD query result for '
'request: %s' %
(repr(inputparams)))
tablefname = cachefname
# try to open the cached file to make sure it's OK
try:
infd = gzip.open(cachefname,'rb')
simbad_objectnames = np.genfromtxt(
infd,
names=True,
delimiter=',',
dtype='U20,f8,f8,U20,U20,U20,i8,U600,f8',
usecols=(0,1,2,3,4,5,6,7,8),
comments='?', # object names can have '#' in them
)
infd.close()
except Exception as e:
LOGEXCEPTION('could not read cached SIMBAD result file: %s, '
'fetching from server again' % cachefname)
return tap_query(querystr,
simbad_mirror=simbad_mirror,
returnformat=returnformat,
forcefetch=True,
cachedir=cachedir,
verbose=verbose,
timeout=timeout,
refresh=refresh,
maxtimeout=maxtimeout)
#
# all done with retrieval, now return the result dict
#
# return a dict pointing to the result file
# we'll parse this later
resdict = {'params':inputparams,
'provenance':provenance,
'result':tablefname}
return resdict
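# Illustrative sketch (not part of the original source): a minimal ADQL query
# through `tap_query`. The '{table}' placeholder is substituted by `tap_query`
# with the configured table name of the selected SIMBAD mirror; the column
# names used here are only an example.
def _example_simbad_tap():
    querystr = ("select top 10 main_id, ra, dec "
                "from {table}")
    res = tap_query(querystr,
                    simbad_mirror='simbad',
                    returnformat='csv',
                    maxtimeout=120.0)
    if res is not None:
        print(res['provenance'], res['result'])
    return res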
|
def objectnames_conesearch(racenter,
declcenter,
searchradiusarcsec,
simbad_mirror='simbad',
returnformat='csv',
forcefetch=False,
cachedir='~/.astrobase/simbad-cache',
verbose=True,
timeout=10.0,
refresh=2.0,
maxtimeout=90.0,
maxtries=1,
complete_query_later=True):
'''This queries the SIMBAD TAP service for a list of object names near the
coords. This is effectively a "reverse" name resolver (i.e. this does the
opposite of SESAME).
Parameters
----------
racenter,declcenter : float
The cone-search center coordinates in decimal degrees
searchradiusarcsec : float
The radius in arcseconds to search around the center coordinates.
simbad_mirror : str
This is the key used to select a SIMBAD mirror from the
`SIMBAD_URLS` dict above. If set, the specified mirror will be used. If
None, a random mirror chosen from that dict will be used.
returnformat : {'csv','votable','json'}
        The returned file format to request from the SIMBAD TAP service.
forcefetch : bool
If this is True, the query will be retried even if cached results for
it exist.
cachedir : str
This points to the directory where results will be downloaded.
verbose : bool
If True, will indicate progress and warn of any issues.
timeout : float
This sets the amount of time in seconds to wait for the service to
respond to our initial request.
refresh : float
This sets the amount of time in seconds to wait before checking if the
result file is available. If the results file isn't available after
`refresh` seconds have elapsed, the function will wait for `refresh`
seconds continuously, until `maxtimeout` is reached or the results file
becomes available.
maxtimeout : float
The maximum amount of time in seconds to wait for a result to become
available after submitting our query request.
maxtries : int
The maximum number of tries (across all mirrors tried) to make to either
submit the request or download the results, before giving up.
complete_query_later : bool
If set to True, a submitted query that does not return a result before
`maxtimeout` has passed will be cancelled but its input request
parameters and the result URL provided by the service will be saved. If
this function is then called later with these same input request
parameters, it will check if the query finally finished and a result is
available. If so, will download the results instead of submitting a new
query. If it's not done yet, will start waiting for results again. To
force launch a new query with the same request parameters, set the
`forcefetch` kwarg to True.
Returns
-------
dict
This returns a dict of the following form::
{'params':dict of the input params used for the query,
'provenance':'cache' or 'new download',
'result':path to the file on disk with the downloaded data table}
'''
# this was generated using the example at:
# http://simbad.u-strasbg.fr/simbad/sim-tap and the table diagram at:
# http://simbad.u-strasbg.fr/simbad/tap/tapsearch.html
query = (
"select a.oid, a.ra, a.dec, a.main_id, a.otype_txt, "
"a.coo_bibcode, a.nbref, b.ids as all_ids, "
"(DISTANCE(POINT('ICRS', a.ra, a.dec), "
"POINT('ICRS', {ra_center:.5f}, {decl_center:.5f})))*3600.0 "
"AS dist_arcsec "
"from basic a join ids b on a.oid = b.oidref where "
"CONTAINS(POINT('ICRS',a.ra, a.dec),"
"CIRCLE('ICRS',{ra_center:.5f},{decl_center:.5f},"
"{search_radius:.6f}))=1 "
"ORDER by dist_arcsec asc "
)
formatted_query = query.format(ra_center=racenter,
decl_center=declcenter,
search_radius=searchradiusarcsec/3600.0)
return tap_query(formatted_query,
simbad_mirror=simbad_mirror,
returnformat=returnformat,
forcefetch=forcefetch,
cachedir=cachedir,
verbose=verbose,
timeout=timeout,
refresh=refresh,
maxtimeout=maxtimeout,
maxtries=maxtries,
complete_query_later=complete_query_later)
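# Illustrative sketch (not part of the original source): a reverse name lookup
# around some example coordinates, reading the downloaded CSV result with
# numpy. The coordinates and search radius are arbitrary.
def _example_reverse_name_lookup(ra=270.0, decl=-30.0, radius_arcsec=120.0):
    import numpy as np
    res = objectnames_conesearch(ra, decl, radius_arcsec,
                                 returnformat='csv',
                                 maxtimeout=120.0)
    if res is None:
        return None
    # the 'result' key points to the downloaded CSV table on disk
    tbl = np.genfromtxt(res['result'], names=True, delimiter=',',
                        dtype=None, encoding='utf-8')
    return tbl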
|
def xmatch_cplist_external_catalogs(cplist,
xmatchpkl,
xmatchradiusarcsec=2.0,
updateexisting=True,
resultstodir=None):
'''This xmatches external catalogs to a collection of checkplots.
Parameters
----------
cplist : list of str
This is the list of checkplot pickle files to process.
xmatchpkl : str
The filename of a pickle prepared beforehand with the
`checkplot.pkl_xmatch.load_xmatch_external_catalogs` function,
containing collected external catalogs to cross-match the objects in the
input `cplist` against.
xmatchradiusarcsec : float
The match radius to use for the cross-match in arcseconds.
updateexisting : bool
If this is True, will only update the `xmatch` dict in each checkplot
pickle with any new cross-matches to the external catalogs. If False,
will overwrite the `xmatch` dict with results from the current run.
resultstodir : str or None
If this is provided, then it must be a directory to write the resulting
checkplots to after xmatch is done. This can be used to keep the
original checkplots in pristine condition for some reason.
Returns
-------
dict
Returns a dict with keys = input checkplot pickle filenames and vals =
xmatch status dict for each checkplot pickle.
'''
# load the external catalog
with open(xmatchpkl,'rb') as infd:
xmd = pickle.load(infd)
# match each object. this is fairly fast, so this is not parallelized at the
# moment
status_dict = {}
for cpf in cplist:
cpd = _read_checkplot_picklefile(cpf)
try:
# match in place
xmatch_external_catalogs(cpd, xmd,
xmatchradiusarcsec=xmatchradiusarcsec,
updatexmatch=updateexisting)
for xmi in cpd['xmatch']:
if cpd['xmatch'][xmi]['found']:
LOGINFO('checkplot %s: %s matched to %s, '
'match dist: %s arcsec' %
(os.path.basename(cpf),
cpd['objectid'],
cpd['xmatch'][xmi]['name'],
cpd['xmatch'][xmi]['distarcsec']))
if not resultstodir:
outcpf = _write_checkplot_picklefile(cpd,
outfile=cpf)
else:
xcpf = os.path.join(resultstodir, os.path.basename(cpf))
outcpf = _write_checkplot_picklefile(cpd,
outfile=xcpf)
status_dict[cpf] = outcpf
except Exception as e:
LOGEXCEPTION('failed to match objects for %s' % cpf)
status_dict[cpf] = None
return status_dict
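# Illustrative sketch (not part of the original source): cross-matching all
# checkplots in a directory against a previously prepared external-catalog
# pickle. 'xmatch-catalogs.pkl' is a hypothetical file made beforehand with
# `checkplot.pkl_xmatch.load_xmatch_external_catalogs`.
def _example_xmatch(cpdir, xmatchpkl='xmatch-catalogs.pkl'):
    import glob
    import os
    cplist = sorted(glob.glob(os.path.join(cpdir, 'checkplot-*.pkl*')))
    return xmatch_cplist_external_catalogs(cplist,
                                           xmatchpkl,
                                           xmatchradiusarcsec=3.0)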
|
def xmatch_cpdir_external_catalogs(cpdir,
xmatchpkl,
cpfileglob='checkplot-*.pkl*',
xmatchradiusarcsec=2.0,
updateexisting=True,
resultstodir=None):
'''This xmatches external catalogs to all checkplots in a directory.
Parameters
    ----------
cpdir : str
This is the directory to search in for checkplots.
xmatchpkl : str
The filename of a pickle prepared beforehand with the
`checkplot.pkl_xmatch.load_xmatch_external_catalogs` function,
containing collected external catalogs to cross-match the objects in the
input `cplist` against.
cpfileglob : str
This is the UNIX fileglob to use in searching for checkplots.
xmatchradiusarcsec : float
The match radius to use for the cross-match in arcseconds.
updateexisting : bool
If this is True, will only update the `xmatch` dict in each checkplot
pickle with any new cross-matches to the external catalogs. If False,
will overwrite the `xmatch` dict with results from the current run.
resultstodir : str or None
If this is provided, then it must be a directory to write the resulting
checkplots to after xmatch is done. This can be used to keep the
original checkplots in pristine condition for some reason.
Returns
-------
dict
Returns a dict with keys = input checkplot pickle filenames and vals =
xmatch status dict for each checkplot pickle.
'''
cplist = glob.glob(os.path.join(cpdir, cpfileglob))
return xmatch_cplist_external_catalogs(
cplist,
xmatchpkl,
xmatchradiusarcsec=xmatchradiusarcsec,
updateexisting=updateexisting,
resultstodir=resultstodir
)
|
def colormagdiagram_cplist(cplist,
outpkl,
color_mag1=['gaiamag','sdssg'],
color_mag2=['kmag','kmag'],
yaxis_mag=['gaia_absmag','rpmj']):
'''This makes color-mag diagrams for all checkplot pickles in the provided
list.
Can make an arbitrary number of CMDs given lists of x-axis colors and y-axis
mags to use.
Parameters
----------
cplist : list of str
This is the list of checkplot pickles to process.
outpkl : str
The filename of the output pickle that will contain the color-mag
information for all objects in the checkplots specified in `cplist`.
color_mag1 : list of str
        This is a list of the keys in each checkplot's `objectinfo` dict that will
be used as color_1 in the equation::
x-axis color = color_mag1 - color_mag2
color_mag2 : list of str
        This is a list of the keys in each checkplot's `objectinfo` dict that will
be used as color_2 in the equation::
x-axis color = color_mag1 - color_mag2
yaxis_mag : list of str
This is a list of the keys in each checkplot's `objectinfo` dict that
will be used as the (absolute) magnitude y-axis of the color-mag
diagrams.
Returns
-------
    dict
        The CMD dict generated for the collection of objects in the input
        checkplot list. This dict is also written to the `outpkl` pickle file.
Notes
-----
This can make many CMDs in one go. For example, the default kwargs for
    `color_mag1`, `color_mag2`, and `yaxis_mag` result in two CMDs generated and
written to the output pickle file:
- CMD1 -> gaiamag - kmag on the x-axis vs gaia_absmag on the y-axis
- CMD2 -> sdssg - kmag on the x-axis vs rpmj (J reduced PM) on the y-axis
'''
# first, we'll collect all of the info
cplist_objectids = []
cplist_mags = []
cplist_colors = []
for cpf in cplist:
cpd = _read_checkplot_picklefile(cpf)
cplist_objectids.append(cpd['objectid'])
thiscp_mags = []
thiscp_colors = []
for cm1, cm2, ym in zip(color_mag1, color_mag2, yaxis_mag):
if (ym in cpd['objectinfo'] and
cpd['objectinfo'][ym] is not None):
thiscp_mags.append(cpd['objectinfo'][ym])
else:
thiscp_mags.append(np.nan)
if (cm1 in cpd['objectinfo'] and
cpd['objectinfo'][cm1] is not None and
cm2 in cpd['objectinfo'] and
cpd['objectinfo'][cm2] is not None):
thiscp_colors.append(cpd['objectinfo'][cm1] -
cpd['objectinfo'][cm2])
else:
thiscp_colors.append(np.nan)
cplist_mags.append(thiscp_mags)
cplist_colors.append(thiscp_colors)
# convert these to arrays
cplist_objectids = np.array(cplist_objectids)
cplist_mags = np.array(cplist_mags)
cplist_colors = np.array(cplist_colors)
# prepare the outdict
cmddict = {'objectids':cplist_objectids,
'mags':cplist_mags,
'colors':cplist_colors,
'color_mag1':color_mag1,
'color_mag2':color_mag2,
'yaxis_mag':yaxis_mag}
# save the pickled figure and dict for fast retrieval later
with open(outpkl,'wb') as outfd:
pickle.dump(cmddict, outfd, pickle.HIGHEST_PROTOCOL)
plt.close('all')
return cmddict
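# Illustrative sketch (not part of the original source): building a CMD pickle
# from a directory of checkplots; the same pickle can later be attached to
# each checkplot with `add_cmds_cplist` further down in this listing. The
# output pickle filename is arbitrary.
def _example_cmd_pickle(cpdir, outpkl='cmd.pkl'):
    import glob
    import os
    cplist = sorted(glob.glob(os.path.join(cpdir, 'checkplot*.pkl*')))
    cmddict = colormagdiagram_cplist(cplist, outpkl)
    # with the default kwargs this produces gaiamag-kmag vs gaia_absmag
    # and sdssg-kmag vs rpmj CMDs
    return cmddict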
|
def colormagdiagram_cpdir(
cpdir,
outpkl,
cpfileglob='checkplot*.pkl*',
color_mag1=['gaiamag','sdssg'],
color_mag2=['kmag','kmag'],
yaxis_mag=['gaia_absmag','rpmj']
):
'''This makes CMDs for all checkplot pickles in the provided directory.
Can make an arbitrary number of CMDs given lists of x-axis colors and y-axis
mags to use.
Parameters
----------
    cpdir : str
This is the directory to get the list of input checkplot pickles from.
outpkl : str
The filename of the output pickle that will contain the color-mag
information for all objects in the checkplots specified in `cplist`.
cpfileglob : str
The UNIX fileglob to use to search for checkplot pickle files.
color_mag1 : list of str
        This is a list of the keys in each checkplot's `objectinfo` dict that will
be used as color_1 in the equation::
x-axis color = color_mag1 - color_mag2
color_mag2 : list of str
        This is a list of the keys in each checkplot's `objectinfo` dict that will
be used as color_2 in the equation::
x-axis color = color_mag1 - color_mag2
yaxis_mag : list of str
This is a list of the keys in each checkplot's `objectinfo` dict that
will be used as the (absolute) magnitude y-axis of the color-mag
diagrams.
Returns
-------
    dict
        The CMD dict generated for the collection of objects in the input
        checkplot directory. This dict is also written to the `outpkl` pickle
        file.
Notes
-----
This can make many CMDs in one go. For example, the default kwargs for
    `color_mag1`, `color_mag2`, and `yaxis_mag` result in two CMDs generated and
written to the output pickle file:
- CMD1 -> gaiamag - kmag on the x-axis vs gaia_absmag on the y-axis
- CMD2 -> sdssg - kmag on the x-axis vs rpmj (J reduced PM) on the y-axis
'''
cplist = glob.glob(os.path.join(cpdir, cpfileglob))
return colormagdiagram_cplist(cplist,
outpkl,
color_mag1=color_mag1,
color_mag2=color_mag2,
yaxis_mag=yaxis_mag)
|
def add_cmd_to_checkplot(
cpx,
cmdpkl,
require_cmd_magcolor=True,
save_cmd_pngs=False
):
'''This adds CMD figures to a checkplot dict or pickle.
Looks up the CMDs in `cmdpkl`, adds the object from `cpx` as a gold(-ish)
star in the plot, and then saves the figure to a base64 encoded PNG, which
can then be read and used by the `checkplotserver`.
Parameters
----------
cpx : str or dict
This is the input checkplot pickle or dict to add the CMD to.
cmdpkl : str or dict
The CMD pickle generated by the `colormagdiagram_cplist` or
`colormagdiagram_cpdir` functions above, or the dict produced by reading
this pickle in.
require_cmd_magcolor : bool
If this is True, a CMD plot will not be made if the color and mag keys
required by the CMD are not present or are nan in this checkplot's
objectinfo dict.
    save_cmd_pngs : bool
If this is True, then will save the CMD plots that were generated and
added back to the checkplotdict as PNGs to the same directory as
`cpx`. If `cpx` is a dict, will save them to the current working
directory.
Returns
-------
str or dict
If `cpx` was a str filename of checkplot pickle, this will return that
filename to indicate that the CMD was added to the file. If `cpx` was a
checkplotdict, this will return the checkplotdict with a new key called
'colormagdiagram' containing the base64 encoded PNG binary streams of
all CMDs generated.
'''
# get the checkplot
if isinstance(cpx, str) and os.path.exists(cpx):
cpdict = _read_checkplot_picklefile(cpx)
elif isinstance(cpx, dict):
cpdict = cpx
else:
LOGERROR('unknown type of checkplot provided as the cpx arg')
return None
# get the CMD
if isinstance(cmdpkl, str) and os.path.exists(cmdpkl):
with open(cmdpkl, 'rb') as infd:
cmd = pickle.load(infd)
elif isinstance(cmdpkl, dict):
cmd = cmdpkl
cpdict['colormagdiagram'] = {}
# get the mags and colors from the CMD dict
cplist_mags = cmd['mags']
cplist_colors = cmd['colors']
# now make the CMD plots for each color-mag combination in the CMD
for c1, c2, ym, ind in zip(cmd['color_mag1'],
cmd['color_mag2'],
cmd['yaxis_mag'],
range(len(cmd['color_mag1']))):
# get these from the checkplot for this object
if (c1 in cpdict['objectinfo'] and
cpdict['objectinfo'][c1] is not None):
c1mag = cpdict['objectinfo'][c1]
else:
c1mag = np.nan
if (c2 in cpdict['objectinfo'] and
cpdict['objectinfo'][c2] is not None):
c2mag = cpdict['objectinfo'][c2]
else:
c2mag = np.nan
if (ym in cpdict['objectinfo'] and
cpdict['objectinfo'][ym] is not None):
ymmag = cpdict['objectinfo'][ym]
else:
ymmag = np.nan
if (require_cmd_magcolor and
not (np.isfinite(c1mag) and
np.isfinite(c2mag) and
np.isfinite(ymmag))):
LOGWARNING("required color: %s-%s or mag: %s are not "
"in this checkplot's objectinfo dict "
"(objectid: %s), skipping CMD..." %
(c1, c2, ym, cpdict['objectid']))
continue
# make the CMD for this color-mag combination
try:
thiscmd_title = r'%s-%s/%s' % (CMD_LABELS[c1],
CMD_LABELS[c2],
CMD_LABELS[ym])
# make the scatter plot
fig = plt.figure(figsize=(10,8))
plt.plot(cplist_colors[:,ind],
cplist_mags[:,ind],
rasterized=True,
marker='o',
linestyle='none',
mew=0,
ms=3)
# put this object on the plot
plt.plot([c1mag - c2mag], [ymmag],
ms=20,
color='#b0ff05',
marker='*',
mew=0)
plt.xlabel(r'$%s - %s$' % (CMD_LABELS[c1], CMD_LABELS[c2]))
plt.ylabel(r'$%s$' % CMD_LABELS[ym])
plt.title('%s - $%s$ CMD' % (cpdict['objectid'], thiscmd_title))
plt.gca().invert_yaxis()
# now save the figure to StrIO and put it back in the checkplot
cmdpng = StrIO()
plt.savefig(cmdpng, bbox_inches='tight',
pad_inches=0.0, format='png')
cmdpng.seek(0)
cmdb64 = base64.b64encode(cmdpng.read())
cmdpng.close()
plt.close('all')
plt.gcf().clear()
cpdict['colormagdiagram']['%s-%s/%s' % (c1,c2,ym)] = cmdb64
# if we're supposed to export to PNG, do so
if save_cmd_pngs:
if isinstance(cpx, str):
outpng = os.path.join(os.path.dirname(cpx),
'cmd-%s-%s-%s.%s.png' %
(cpdict['objectid'],
c1,c2,ym))
else:
outpng = 'cmd-%s-%s-%s.%s.png' % (cpdict['objectid'],
c1,c2,ym)
_base64_to_file(cmdb64, outpng)
except Exception as e:
LOGEXCEPTION('could not make the CMD for %s-%s/%s from %s, skipping...' %
(c1, c2, ym, cmdpkl))
continue
#
# end of making CMDs
#
if isinstance(cpx, str):
cpf = _write_checkplot_picklefile(cpdict, outfile=cpx, protocol=4)
return cpf
elif isinstance(cpx, dict):
return cpdict
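# Usage sketch (hypothetical file names): apply a previously generated CMD
# pickle to a single checkplot pickle and also export the CMD plots as PNGs
# next to the checkplot.
#
#   updated_cpf = add_cmd_to_checkplot(
#       'checkplot-HAT-123-0001234.pkl.gz',   # hypothetical checkplot pickle
#       'my-cmd.pkl',                         # hypothetical CMD pickle
#       require_cmd_magcolor=True,
#       save_cmd_pngs=True
#   )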
def add_cmds_cplist(cplist, cmdpkl,
require_cmd_magcolor=True,
save_cmd_pngs=False):
'''This adds CMDs for each object in cplist.
Parameters
----------
cplist : list of str
This is the input list of checkplot pickles to add the CMDs to.
cmdpkl : str
This is the filename of the CMD pickle created previously.
require_cmd_magcolor : bool
If this is True, a CMD plot will not be made if the color and mag keys
required by the CMD are not present or are nan in each checkplot's
objectinfo dict.
save_cmd_pngs : bool
If this is True, then will save the CMD plots that were generated and
added back to the checkplotdict as PNGs to the same directory as
each checkplot pickle in `cplist`.
Returns
-------
Nothing.
'''
# load the CMD first to save on IO
with open(cmdpkl,'rb') as infd:
cmd = pickle.load(infd)
for cpf in cplist:
add_cmd_to_checkplot(cpf, cmd,
require_cmd_magcolor=require_cmd_magcolor,
save_cmd_pngs=save_cmd_pngs)
def add_cmds_cpdir(cpdir,
cmdpkl,
cpfileglob='checkplot*.pkl*',
require_cmd_magcolor=True,
save_cmd_pngs=False):
'''This adds CMDs for each object in cpdir.
Parameters
----------
cpdir : str
This is the directory to search for checkplot pickles.
cmdpkl : str
This is the filename of the CMD pickle created previously.
cpfileglob : str
The UNIX fileglob to use when searching for checkplot pickles to operate
on.
require_cmd_magcolor : bool
If this is True, a CMD plot will not be made if the color and mag keys
required by the CMD are not present or are nan in each checkplot's
objectinfo dict.
save_cmd_pngs : bool
If this is True, then will save the CMD plots that were generated and
added back to the checkplotdict as PNGs to the same directory as
each checkplot pickle found in `cpdir`.
Returns
-------
Nothing.
'''
cplist = glob.glob(os.path.join(cpdir, cpfileglob))
return add_cmds_cplist(cplist,
cmdpkl,
require_cmd_magcolor=require_cmd_magcolor,
save_cmd_pngs=save_cmd_pngs)
def cp_objectinfo_worker(task):
'''This is a parallel worker for `parallel_update_cp_objectinfo`.
Parameters
----------
task : tuple
- task[0] = checkplot pickle file
- task[1] = kwargs
Returns
-------
str
The name of the checkplot file that was updated. None if the update
fails for some reason.
'''
cpf, cpkwargs = task
try:
newcpf = update_checkplot_objectinfo(cpf, **cpkwargs)
return newcpf
except Exception as e:
LOGEXCEPTION('failed to update objectinfo for %s' % cpf)
return None
def parallel_update_objectinfo_cplist(
cplist,
liststartindex=None,
maxobjects=None,
nworkers=NCPUS,
fast_mode=False,
findercmap='gray_r',
finderconvolve=None,
deredden_object=True,
custom_bandpasses=None,
gaia_submit_timeout=10.0,
gaia_submit_tries=3,
gaia_max_timeout=180.0,
gaia_mirror=None,
complete_query_later=True,
lclistpkl=None,
nbrradiusarcsec=60.0,
maxnumneighbors=5,
plotdpi=100,
findercachedir='~/.astrobase/stamp-cache',
verbose=True
):
'''
This updates objectinfo for a list of checkplots.
Useful in cases where a previous round of GAIA/finderchart/external catalog
acquisition failed. This will preserve the following keys in the checkplots
if they exist:
comments
varinfo
objectinfo.objecttags
Parameters
----------
cplist : list of str
A list of checkplot pickle file names to update.
liststartindex : int
The index of the input list to start working at.
maxobjects : int
The maximum number of objects to process in this run. Use this with
`liststartindex` to effectively distribute working on a large list of
input checkplot pickles over several sessions or machines.
nworkers : int
The number of parallel workers that will work on the checkplot
update process.
fast_mode : bool or float
This runs the external catalog operations in a "fast" mode, with short
timeouts and not trying to hit external catalogs that take a long time
to respond. See the docstring for
`checkplot.pkl_utils._pkl_finder_objectinfo` for details on how this
works. If this is True, will run in "fast" mode with default timeouts (5
seconds in most cases). If this is a float, will run in "fast" mode with
the provided timeout value in seconds.
findercmap : str or matplotlib.cm.Colormap object
The Colormap object to use for the finder chart image.
finderconvolve : astropy.convolution.Kernel object or None
If not None, the Kernel object to use for convolving the finder image.
deredden_object : bool
If this is True, will use the 2MASS DUST service to get extinction
coefficients in various bands, and then try to deredden the magnitudes
and colors of the object already present in the checkplot's objectinfo
dict.
custom_bandpasses : dict
This is a dict used to provide custom bandpass definitions for any
magnitude measurements in the objectinfo dict that are not automatically
recognized by the `varclass.starfeatures.color_features` function. See
its docstring for details on the required format.
gaia_submit_timeout : float
Sets the timeout in seconds to use when submitting a request to look up
the object's information to the GAIA service. Note that if `fast_mode`
is set, this is ignored.
gaia_submit_tries : int
Sets the maximum number of times the GAIA services will be contacted to
obtain this object's information. If `fast_mode` is set, this is
ignored, and the services will be contacted only once (meaning that a
failure to respond will be silently ignored and no GAIA data will be
added to the checkplot's objectinfo dict).
gaia_max_timeout : float
Sets the timeout in seconds to use when waiting for the GAIA service to
respond to our request for the object's information. Note that if
`fast_mode` is set, this is ignored.
gaia_mirror : str
This sets the GAIA mirror to use. This is a key in the
`services.gaia.GAIA_URLS` dict which defines the URLs to hit for each
mirror.
complete_query_later : bool
If this is True, saves the state of GAIA queries that are not yet
complete when `gaia_max_timeout` is reached while waiting for the GAIA
service to respond to our request. A later call for GAIA info on the
same object will attempt to pick up the results from the existing query
if it's completed. If `fast_mode` is True, this is ignored.
lclistpkl : dict or str
If this is provided, must be a dict resulting from reading a catalog
produced by the `lcproc.catalogs.make_lclist` function or a str path
pointing to the pickle file produced by that function. This catalog is
used to find neighbors of the current object in the current light curve
collection. Looking at neighbors of the object within the radius
specified by `nbrradiusarcsec` is useful for light curves produced by
instruments that have a large pixel scale, so are susceptible to
blending of variability and potential confusion of neighbor variability
with that of the actual object being looked at. If this is None, no
neighbor lookups will be performed.
nbrradiusarcsec : float
The radius in arcseconds to use for a search conducted around the
coordinates of this object to look for any potential confusion and
blending of variability amplitude caused by their proximity.
maxnumneighbors : int
The maximum number of neighbors that will have their light curves and
magnitudes noted in this checkplot as potential blends with the target
object.
plotdpi : int
The resolution in DPI of the plots to generate in this function
(e.g. the finder chart, etc.)
findercachedir : str
The path to the astrobase cache directory for finder chart downloads
from the NASA SkyView service.
verbose : bool
If True, will indicate progress and warn about potential problems.
Returns
-------
list of str
Paths to the updated checkplot pickle files.
'''
# work around the Darwin segfault after fork if no network activity in
# main thread bug: https://bugs.python.org/issue30385#msg293958
if sys.platform == 'darwin':
import requests
requests.get('http://captive.apple.com/hotspot-detect.html')
# handle the start and end indices
if (liststartindex is not None) and (maxobjects is None):
cplist = cplist[liststartindex:]
elif (liststartindex is None) and (maxobjects is not None):
cplist = cplist[:maxobjects]
elif (liststartindex is not None) and (maxobjects is not None):
cplist = (
cplist[liststartindex:liststartindex+maxobjects]
)
tasks = [(x, {'fast_mode':fast_mode,
'findercmap':findercmap,
'finderconvolve':finderconvolve,
'deredden_object':deredden_object,
'custom_bandpasses':custom_bandpasses,
'gaia_submit_timeout':gaia_submit_timeout,
'gaia_submit_tries':gaia_submit_tries,
'gaia_max_timeout':gaia_max_timeout,
'gaia_mirror':gaia_mirror,
'complete_query_later':complete_query_later,
'lclistpkl':lclistpkl,
'nbrradiusarcsec':nbrradiusarcsec,
'maxnumneighbors':maxnumneighbors,
'plotdpi':plotdpi,
'findercachedir':findercachedir,
'verbose':verbose}) for x in cplist]
resultfutures = []
results = []
with ProcessPoolExecutor(max_workers=nworkers) as executor:
resultfutures = executor.map(cp_objectinfo_worker, tasks)
results = [x for x in resultfutures]
executor.shutdown()
return results
def parallel_update_objectinfo_cpdir(cpdir,
cpglob='checkplot-*.pkl*',
liststartindex=None,
maxobjects=None,
nworkers=NCPUS,
fast_mode=False,
findercmap='gray_r',
finderconvolve=None,
deredden_object=True,
custom_bandpasses=None,
gaia_submit_timeout=10.0,
gaia_submit_tries=3,
gaia_max_timeout=180.0,
gaia_mirror=None,
complete_query_later=True,
lclistpkl=None,
nbrradiusarcsec=60.0,
maxnumneighbors=5,
plotdpi=100,
findercachedir='~/.astrobase/stamp-cache',
verbose=True):
'''This updates the objectinfo for a directory of checkplot pickles.
Useful in cases where a previous round of GAIA/finderchart/external catalog
acquisition failed. This will preserve the following keys in the checkplots
if they exist:
comments
varinfo
objectinfo.objecttags
Parameters
----------
cpdir : str
The directory to look for checkplot pickles in.
cpglob : str
The UNIX fileglob to use when searching for checkplot pickle files.
liststartindex : int
The index of the input list to start working at.
maxobjects : int
The maximum number of objects to process in this run. Use this with
`liststartindex` to effectively distribute working on a large list of
input checkplot pickles over several sessions or machines.
nworkers : int
The number of parallel workers that will work on the checkplot
update process.
fast_mode : bool or float
This runs the external catalog operations in a "fast" mode, with short
timeouts and not trying to hit external catalogs that take a long time
to respond. See the docstring for
`checkplot.pkl_utils._pkl_finder_objectinfo` for details on how this
works. If this is True, will run in "fast" mode with default timeouts (5
seconds in most cases). If this is a float, will run in "fast" mode with
the provided timeout value in seconds.
findercmap : str or matplotlib.cm.Colormap object
The Colormap object to use for the finder chart image.
finderconvolve : astropy.convolution.Kernel object or None
If not None, the Kernel object to use for convolving the finder image.
deredden_object : bool
If this is True, will use the 2MASS DUST service to get extinction
coefficients in various bands, and then try to deredden the magnitudes
and colors of the object already present in the checkplot's objectinfo
dict.
custom_bandpasses : dict
This is a dict used to provide custom bandpass definitions for any
magnitude measurements in the objectinfo dict that are not automatically
recognized by the `varclass.starfeatures.color_features` function. See
its docstring for details on the required format.
gaia_submit_timeout : float
Sets the timeout in seconds to use when submitting a request to look up
the object's information to the GAIA service. Note that if `fast_mode`
is set, this is ignored.
gaia_submit_tries : int
Sets the maximum number of times the GAIA services will be contacted to
obtain this object's information. If `fast_mode` is set, this is
ignored, and the services will be contacted only once (meaning that a
failure to respond will be silently ignored and no GAIA data will be
added to the checkplot's objectinfo dict).
gaia_max_timeout : float
Sets the timeout in seconds to use when waiting for the GAIA service to
respond to our request for the object's information. Note that if
`fast_mode` is set, this is ignored.
gaia_mirror : str
This sets the GAIA mirror to use. This is a key in the
`services.gaia.GAIA_URLS` dict which defines the URLs to hit for each
mirror.
complete_query_later : bool
If this is True, saves the state of GAIA queries that are not yet
complete when `gaia_max_timeout` is reached while waiting for the GAIA
service to respond to our request. A later call for GAIA info on the
same object will attempt to pick up the results from the existing query
if it's completed. If `fast_mode` is True, this is ignored.
lclistpkl : dict or str
If this is provided, must be a dict resulting from reading a catalog
produced by the `lcproc.catalogs.make_lclist` function or a str path
pointing to the pickle file produced by that function. This catalog is
used to find neighbors of the current object in the current light curve
collection. Looking at neighbors of the object within the radius
specified by `nbrradiusarcsec` is useful for light curves produced by
instruments that have a large pixel scale, so are susceptible to
blending of variability and potential confusion of neighbor variability
with that of the actual object being looked at. If this is None, no
neighbor lookups will be performed.
nbrradiusarcsec : float
The radius in arcseconds to use for a search conducted around the
coordinates of this object to look for any potential confusion and
blending of variability amplitude caused by their proximity.
maxnumneighbors : int
The maximum number of neighbors that will have their light curves and
magnitudes noted in this checkplot as potential blends with the target
object.
plotdpi : int
The resolution in DPI of the plots to generate in this function
(e.g. the finder chart, etc.)
findercachedir : str
The path to the astrobase cache directory for finder chart downloads
from the NASA SkyView service.
verbose : bool
If True, will indicate progress and warn about potential problems.
Returns
-------
list of str
Paths to the updated checkplot pickle files.
'''
cplist = sorted(glob.glob(os.path.join(cpdir, cpglob)))
return parallel_update_objectinfo_cplist(
cplist,
liststartindex=liststartindex,
maxobjects=maxobjects,
nworkers=nworkers,
fast_mode=fast_mode,
findercmap=findercmap,
finderconvolve=finderconvolve,
deredden_object=deredden_object,
custom_bandpasses=custom_bandpasses,
gaia_submit_timeout=gaia_submit_timeout,
gaia_submit_tries=gaia_submit_tries,
gaia_max_timeout=gaia_max_timeout,
gaia_mirror=gaia_mirror,
complete_query_later=complete_query_later,
lclistpkl=lclistpkl,
nbrradiusarcsec=nbrradiusarcsec,
maxnumneighbors=maxnumneighbors,
plotdpi=plotdpi,
findercachedir=findercachedir,
verbose=verbose
)
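# Usage sketch (hypothetical directory and catalog paths): refresh the
# objectinfo for all checkplots in a directory, using the shorter
# "fast mode" timeouts for the external catalog queries.
#
#   updated_cpfs = parallel_update_objectinfo_cpdir(
#       '/data/project/checkplots',
#       cpglob='checkplot-*.pkl*',
#       nworkers=4,
#       fast_mode=True,
#       lclistpkl='/data/project/lclist.pkl'   # hypothetical neighbor catalog
#   )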
def checkplot_infokey_worker(task):
'''This gets the required keys from the requested file.
Parameters
----------
task : tuple
Task is a two element tuple::
- task[0] is the checkplot pickle file to read
- task[1] is a list of key addresses (each a list of str or int) indicating
the items to extract from the loaded checkplot dict
Returns
-------
list
This is a list of all of the items at the requested key addresses.
'''
cpf, keys = task
cpd = _read_checkplot_picklefile(cpf)
resultkeys = []
for k in keys:
try:
resultkeys.append(_dict_get(cpd, k))
except Exception as e:
resultkeys.append(np.nan)
return resultkeys
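# For reference, a key address like 'varinfo.varfeatures.stetsonj' is split
# into the list ['varinfo', 'varfeatures', 'stetsonj'] and then walked
# through the nested checkplot dict one element at a time. The actual
# _dict_get used above is defined elsewhere in this module and may differ;
# a minimal sketch of the idea is:
#
#   from functools import reduce
#   from operator import getitem
#
#   def _dict_get_sketch(datadict, keyaddress):
#       # each element of keyaddress is a dict key or an integer list index
#       return reduce(getitem, keyaddress, datadict)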
def main():
'''This is the main function of this script.
The current script args are shown below ::
Usage: checkplotlist [-h] [--search SEARCH] [--sortby SORTBY]
[--filterby FILTERBY] [--splitout SPLITOUT]
[--outprefix OUTPREFIX] [--maxkeyworkers MAXKEYWORKERS]
{pkl,png} cpdir
This makes a checkplot file list for use with the checkplot-viewer.html
(for checkplot PNGs) or the checkplotserver.py (for checkplot pickles)
webapps.
positional arguments:
{pkl,png} type of checkplot to search for: pkl -> checkplot
pickles, png -> checkplot PNGs
cpdir directory containing the checkplots to process
optional arguments:
-h, --help show this help message and exit
--search SEARCH file glob prefix to use when searching for checkplots,
default: '*checkplot*', (the extension is added
automatically - .png or .pkl)
--sortby SORTBY the sort key and order to use when sorting
--filterby FILTERBY the filter key and condition to use when filtering.
you can specify this multiple times to filter by
several keys at once. all filters are joined with a
logical AND operation in the order they're given.
--splitout SPLITOUT if there are more than SPLITOUT objects in the target
directory (default: 5000), checkplotlist will split
the output JSON into multiple files. this helps keep
the checkplotserver webapp responsive.
--outprefix OUTPREFIX
a prefix string to use for the output JSON file(s).
use this to separate out different sort orders or
filter conditions, for example. if this isn't
provided, but --sortby or --filterby are, will use
those to figure out the output files' prefixes
--maxkeyworkers MAXKEYWORKERS
the number of parallel workers that will be launched
to retrieve checkplot key values used for sorting and
filtering (default: 2)
'''
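# Example invocations (hypothetical paths; the sort and filter specs follow
# the parsing below -- sort specs look like '<key>|<asc|desc>' and filter
# specs look like '<key>|<condition>@<operand>' with <condition> one of
# ge, gt, le, lt, eq):
#
#   checkplotlist pkl /data/project/checkplots
#   checkplotlist pkl /data/project/checkplots \
#       --sortby 'varinfo.varfeatures.stetsonj|desc' \
#       --filterby 'varinfo.varfeatures.stetsonj|gt@0.5'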
####################
## PARSE THE ARGS ##
####################
aparser = argparse.ArgumentParser(
epilog=PROGEPILOG,
description=PROGDESC,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
aparser.add_argument(
'cptype',
action='store',
choices=['pkl','png'],
type=str,
help=("type of checkplot to search for: pkl -> checkplot pickles, "
"png -> checkplot PNGs")
)
aparser.add_argument(
'cpdir',
action='store',
type=str,
help=("directory containing the checkplots to process")
)
# TODO: here, make the --search kwarg an array (i.e. allow multiple search
# statements). the use of this will be to make checkplotserver able to load
# more than one checkplot per object (i.e. different mag types -- epd
# vs. tfa -- or different bands -- r vs. i -- at the SAME time).
# TODO: we'll fix checkplotserver and its js so there's a vertical tab
# column between the left period/epoch/tags panel and main
# periodogram/phased-LCs panel on the right. the user will be able to flip
# between tabs to look at the object in all loaded alternative checkplots.
# TODO: need to also think about to sort/filter; for now let's make it so
# the sorting works on a chosen checkplot search list, if we give --search
# 'checkplot*iep1' and --search 'checkplot*itf1', specify --sortpkls and
# --filterpkls kwargs, which match the given globs for the --search
# kwargs. e.g. we'd specify --sortpkls 'checkplot*iep1' to sort everything
# by the specified --sortby values in those pickles.
# TODO: we'll have to change the output JSON so it's primarily by objectid
# instead of checkplot filenames. each objectid will have its own list of
# checkplots to use for the frontend.
aparser.add_argument(
'--search',
action='store',
default='*checkplot*',
type=str,
help=("file glob prefix to use when searching for checkplots, "
"default: '%(default)s', "
"(the extension is added automatically - .png or .pkl)")
)
aparser.add_argument(
'--sortby',
action='store',
type=str,
help=("the sort key and order to use when sorting")
)
aparser.add_argument(
'--filterby',
action='append',
type=str,
help=("the filter key and condition to use when filtering. "
"you can specify this multiple times to filter by "
"several keys at once. all filters are joined with a "
"logical AND operation in the order they're given.")
)
aparser.add_argument(
'--splitout',
action='store',
type=int,
default=5000,
help=("if there are more than SPLITOUT objects in "
"the target directory (default: %(default)s), "
"checkplotlist will split the output JSON into multiple files. "
"this helps keep the checkplotserver webapp responsive.")
)
aparser.add_argument(
'--outprefix',
action='store',
type=str,
help=("a prefix string to use for the output JSON file(s). "
"use this to separate out different sort orders "
"or filter conditions, for example. "
"if this isn't provided, but --sortby or --filterby are, "
"will use those to figure out the output files' prefixes")
)
aparser.add_argument(
'--maxkeyworkers',
action='store',
type=int,
default=int(CPU_COUNT/4.0),
help=("the number of parallel workers that will be launched "
"to retrieve checkplot key values used for "
"sorting and filtering (default: %(default)s)")
)
args = aparser.parse_args()
checkplotbasedir = args.cpdir
fileglob = args.search
splitout = args.splitout
outprefix = args.outprefix if args.outprefix else None
# see if there's a sorting order
if args.sortby:
sortkey, sortorder = args.sortby.split('|')
if outprefix is None:
outprefix = args.sortby
else:
sortkey, sortorder = 'objectid', 'asc'
# see if there's a filter condition
if args.filterby:
filterkeys, filterconditions = [], []
# load all the filters
for filt in args.filterby:
f = filt.split('|')
filterkeys.append(f[0])
filterconditions.append(f[1])
# generate the output file's prefix
if outprefix is None:
outprefix = '-'.join(args.filterby)
else:
outprefix = '%s-%s' % ('-'.join(args.filterby), outprefix)
else:
filterkeys, filterconditions = None, None
if args.cptype == 'pkl':
checkplotext = 'pkl'
elif args.cptype == 'png':
checkplotext = 'png'
else:
print("unknown format for checkplots: %s! can't continue!"
% args.cptype)
sys.exit(1)
#######################
## NOW START WORKING ##
#######################
currdir = os.getcwd()
checkplotglob = os.path.join(checkplotbasedir,
'%s.%s' % (fileglob, checkplotext))
print('searching for checkplots: %s' % checkplotglob)
searchresults = glob.glob(checkplotglob)
if searchresults:
print('found %s checkplot files in dir: %s' %
(len(searchresults), checkplotbasedir))
# see if we should sort the searchresults in some special order
# this requires an arg on the commandline of the form:
# '<sortkey>|<asc|desc>'
# where sortkey is some key in the checkplot pickle:
# this can be a simple key: e.g. objectid
# or it can be a composite key: e.g. varinfo.varfeatures.stetsonj
# and sortorder is either 'asc' or 'desc' for ascending/descending sort
# filter specs are of the form '<filterkey>|<condition>@<operand>', where
# <condition> is one of: 'ge', 'gt', 'le', 'lt', 'eq' and <operand> is a
# string, float, or int to use when applying <condition>. each --filterby
# arg carries a single condition; multiple --filterby args are joined with
# a logical AND
# first, take care of sort keys
sortdone = False
# second, take care of any filters
filterok = False
filterstatements = []
# make sure we only run these operations on checkplot pickles
if ((args.cptype == 'pkl') and
((sortkey and sortorder) or (filterkeys and filterconditions))):
keystoget = []
# handle sorting
if (sortkey and sortorder):
print('sorting checkplot pickles by %s in order: %s' %
(sortkey, sortorder))
# dereference the sort key
sortkeys = sortkey.split('.')
# if there are any integers in the sortkeys strings, interpret
# these to mean actual integer indexes of lists or integer keys
# for dicts this allows us to move into arrays easily by
# indexing them
if sys.version_info[:2] < (3,4):
sortkeys = [(int(x) if x.isdigit() else x)
for x in sortkeys]
else:
sortkeys = [(int(x) if x.isdecimal() else x)
for x in sortkeys]
keystoget.append(sortkeys)
# handle filtering
if (filterkeys and filterconditions):
print('filtering checkplot pickles by %s using: %s' %
(filterkeys, filterconditions))
# add all the filtkeys to the list of keys to get
for fdk in filterkeys:
# dereference the filter dict key
fdictkeys = fdk.split('.')
fdictkeys = [(int(x) if x.isdecimal() else x)
for x in fdictkeys]
keystoget.append(fdictkeys)
print('retrieving checkplot info using %s workers...'
% args.maxkeyworkers)
# launch the key retrieval
pool = mp.Pool(args.maxkeyworkers)
tasks = [(x, keystoget) for x in searchresults]
keytargets = pool.map(checkplot_infokey_worker, tasks)
pool.close()
pool.join()
# now that we have keys, we need to use them
# keys will be returned in the order we put them into keystoget
# if keystoget is more than 1 element, then it's either sorting
# followed by filtering (multiple)...
if (len(keystoget) > 1 and
(sortkey and sortorder) and
(filterkeys and filterconditions)):
# the first elem is sort key targets
sorttargets = [x[0] for x in keytargets]
# all of the rest are filter targets
filtertargets = [x[1:] for x in keytargets]
# otherwise, it's just multiple filters
elif (len(keystoget) > 1 and
(not (sortkey and sortorder)) and
(filterkeys and filterconditions)):
sorttargets = None
filtertargets = keytargets
# if there's only one element in keytoget, then it's either just a
# sort target...
elif (len(keystoget) == 1 and
(sortkey and sortorder) and
(not(filterkeys and filterconditions))):
sorttargets = keytargets
filtertargets = None
# or it's just a filter target
elif (len(keystoget) == 1 and
(filterkeys and filterconditions) and
(not(sortkey and sortorder))):
sorttargets = None
filtertargets = keytargets
# turn the search results into an np.array before we do
# sorting/filtering
searchresults = np.array(searchresults)
if sorttargets:
sorttargets = np.ravel(np.array(sorttargets))
sortind = np.argsort(sorttargets)
if sortorder == 'desc':
sortind = sortind[::-1]
# sort the search results in the requested order
searchresults = searchresults[sortind]
sortdone = True
if filtertargets:
# don't forget to also sort the filtertargets in the same order
# as sorttargets so we can get the correct objects to filter.
# now figure out the filter conditions: <condition>@<operand>
# where <condition> is one of: 'ge', 'gt', 'le', 'lt', 'eq' and
# <operand> is a string, float, or int to use when applying
# <condition>
finalfilterind = []
for ind, fcond in enumerate(filterconditions):
thisftarget = np.array([x[ind] for x in filtertargets])
if (sortdone):
thisftarget = thisftarget[sortind]
try:
foperator, foperand = fcond.split('@')
foperator = FILTEROPS[foperator]
# we'll do a straight eval of the filter
# yes: this is unsafe
filterstr = (
'np.isfinite(thisftarget) & (thisftarget %s %s)' %
(foperator, foperand)
)
filterind = eval(filterstr)
# add this filter to the finalfilterind
finalfilterind.append(filterind)
# update the filterstatements
filterstatements.append('%s %s %s' % (filterkeys[ind],
foperator,
foperand))
except Exception as e:
print('ERR! could not understand filter spec: %s'
'\nexception was: %s' %
(args.filterby[ind], e))
print('WRN! not applying broken filter')
#
# DONE with evaluating each filter, get final results below
#
# column stack the overall filter ind
finalfilterind = np.column_stack(finalfilterind)
# do a logical AND across the rows
finalfilterind = np.all(finalfilterind, axis=1)
# these are the final results after ANDing all the filters
filterresults = searchresults[finalfilterind]
# make sure we got some results
if filterresults.size > 0:
print('filters applied: %s -> objects found: %s ' %
(repr(args.filterby), filterresults.size))
searchresults = filterresults
filterok = True
# otherwise, applying all of the filters killed everything
else:
print('WRN! filtering failed! %s -> ZERO objects found!' %
(repr(args.filterby), ))
print('WRN! not applying any filters')
# all done with sorting and filtering
# turn the searchresults back into a list
searchresults = searchresults.tolist()
# if there's no special sort order defined, use the usual sort order
# at the end after filtering
if not(sortkey and sortorder):
print('WRN! no special sort key and order/'
'filter key and condition specified, '
'sorting checkplot pickles '
'using usual alphanumeric sort...')
searchresults = sorted(searchresults)
sortkey = 'filename'
sortorder = 'asc'
nchunks = int(len(searchresults)/splitout) + 1
searchchunks = [searchresults[x*splitout:x*splitout+splitout] for x
in range(nchunks)]
if nchunks > 1:
print('WRN! more than %s checkplots in final list, '
'splitting into %s chunks' % (splitout, nchunks))
# if the filter failed, zero out filterkey
if (filterkeys and filterconditions) and not filterok:
filterstatements = []
# generate the output
for chunkind, chunk in enumerate(searchchunks):
# figure out if we need to split the JSON file
outjson = os.path.abspath(
os.path.join(
currdir,
'%scheckplot-filelist%s.json' % (
('%s-' % outprefix if outprefix is not None else ''),
('-%02i' % chunkind if len(searchchunks) > 1 else ''),
)
)
)
outjson = outjson.replace('|','_')
outjson = outjson.replace('@','_')
# ask if the checkplot list JSON should be updated
if os.path.exists(outjson):
if sys.version_info[:2] < (3,0):
answer = raw_input(
'There is an existing '
'checkplot list file in this '
'directory:\n %s\nDo you want to '
'overwrite it completely? (default: no) [y/n] ' %
outjson
)
else:
answer = input(
'There is an existing '
'checkplot list file in this '
'directory:\n %s\nDo you want to '
'overwrite it completely? (default: no) [y/n] ' %
outjson
)
# if it's OK to overwrite, then do so
if answer and answer == 'y':
with open(outjson,'w') as outfd:
print('WRN! completely overwriting '
'existing checkplot list %s' % outjson)
outdict = {
'checkplots':chunk,
'nfiles':len(chunk),
'sortkey':sortkey,
'sortorder':sortorder,
'filterstatements':filterstatements
}
json.dump(outdict,outfd)
# if it's not OK to overwrite, then
else:
# read in the outjson, and add stuff to it for objects that
# don't have an entry
print('only updating existing checkplot list '
'file with any new checkplot pickles')
with open(outjson,'r') as infd:
indict = json.load(infd)
# update the checkplot list, nfiles, sort info, and filter info only
indict['checkplots'] = chunk
indict['nfiles'] = len(chunk)
indict['sortkey'] = sortkey
indict['sortorder'] = sortorder
indict['filterstatements'] = filterstatements
# write the updated dict back to the file
with open(outjson,'w') as outfd:
json.dump(indict, outfd)
# if this is a new output file
else:
with open(outjson,'w') as outfd:
outdict = {'checkplots':chunk,
'nfiles':len(chunk),
'sortkey':sortkey,
'sortorder':sortorder,
'filterstatements':filterstatements}
json.dump(outdict,outfd)
if os.path.exists(outjson):
print('checkplot file list written to %s' % outjson)
else:
print('ERR! writing the checkplot file list failed!')
else:
print('ERR! no checkplots found in %s' % checkplotbasedir)
def _gaussian(x, amp, loc, std):
'''This is a simple gaussian.
Parameters
----------
x : np.array
The items at which the Gaussian is evaluated.
amp : float
The amplitude of the Gaussian.
loc : float
The central value of the Gaussian.
std : float
The standard deviation of the Gaussian.
Returns
-------
np.array
Returns the Gaussian evaluated at the items in `x`, using the provided
parameters of `amp`, `loc`, and `std`.
'''
return amp * np.exp(-((x - loc)*(x - loc))/(2.0*std*std))
def _double_inverted_gaussian(x,
amp1, loc1, std1,
amp2, loc2, std2):
'''This is a double inverted gaussian.
Parameters
----------
x : np.array
The items at which the Gaussian is evaluated.
amp1,amp2 : float
The amplitude of Gaussian 1 and Gaussian 2.
loc1,loc2 : float
The central value of Gaussian 1 and Gaussian 2.
std1,std2 : float
The standard deviation of Gaussian 1 and Gaussian 2.
Returns
-------
np.array
Returns a double inverted Gaussian function evaluated at the items in
`x`, using the provided parameters of `amp`, `loc`, and `std` for two
component Gaussians 1 and 2.
'''
gaussian1 = -_gaussian(x,amp1,loc1,std1)
gaussian2 = -_gaussian(x,amp2,loc2,std2)
return gaussian1 + gaussian2
def invgauss_eclipses_func(ebparams, times, mags, errs):
'''This returns a double inverted Gaussian eclipse model.
Suitable for first-order modeling of eclipsing binaries.
Parameters
----------
ebparams : list of float
This contains the parameters for the eclipsing binary::
ebparams = [period (time),
epoch (time),
pdepth: primary eclipse depth (mags),
pduration: primary eclipse duration (phase),
psdepthratio: primary-secondary eclipse depth ratio,
secondaryphase: center phase of the secondary eclipse]
`period` is the period in days.
`epoch` is the time of minimum in JD.
`pdepth` is the depth of the primary eclipse.
- for magnitudes -> pdepth should be < 0
- for fluxes -> pdepth should be > 0
`pduration` is the length of the primary eclipse in phase.
`psdepthratio` is the ratio in the eclipse depths:
`depth_secondary/depth_primary`. This is generally the same as the ratio
of the `T_effs` of the two stars.
`secondaryphase` is the phase at which the minimum of the secondary
eclipse is located. This effectively parameterizes eccentricity.
All of these will then have fitted values after the fit is done.
times,mags,errs : np.array
The input time-series of measurements and associated errors for which
the eclipse model will be generated. The times will be used to generate
model mags, and the input `times`, `mags`, and `errs` will be resorted
by model phase and returned.
Returns
-------
(modelmags, phase, ptimes, pmags, perrs) : tuple
Returns the model mags and phase values. Also returns the input `times`,
`mags`, and `errs` sorted by the model's phase.
'''
(period, epoch, pdepth, pduration, depthratio, secondaryphase) = ebparams
# generate the phases
iphase = (times - epoch)/period
iphase = iphase - np.floor(iphase)
phasesortind = np.argsort(iphase)
phase = iphase[phasesortind]
ptimes = times[phasesortind]
pmags = mags[phasesortind]
perrs = errs[phasesortind]
zerolevel = np.median(pmags)
modelmags = np.full_like(phase, zerolevel)
primaryecl_amp = -pdepth
secondaryecl_amp = -pdepth * depthratio
primaryecl_std = pduration/5.0 # we use 5-sigma as full-width -> duration
secondaryecl_std = pduration/5.0 # secondary eclipse has the same duration
halfduration = pduration/2.0
# phase indices
primary_eclipse_ingress = (
(phase >= (1.0 - halfduration)) & (phase <= 1.0)
)
primary_eclipse_egress = (
(phase >= 0.0) & (phase <= halfduration)
)
secondary_eclipse_phase = (
(phase >= (secondaryphase - halfduration)) &
(phase <= (secondaryphase + halfduration))
)
# put in the eclipses
modelmags[primary_eclipse_ingress] = (
zerolevel + _gaussian(phase[primary_eclipse_ingress],
primaryecl_amp,
1.0,
primaryecl_std)
)
modelmags[primary_eclipse_egress] = (
zerolevel + _gaussian(phase[primary_eclipse_egress],
primaryecl_amp,
0.0,
primaryecl_std)
)
modelmags[secondary_eclipse_phase] = (
zerolevel + _gaussian(phase[secondary_eclipse_phase],
secondaryecl_amp,
secondaryphase,
secondaryecl_std)
)
return modelmags, phase, ptimes, pmags, perrs
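# Usage sketch (illustrative values only): evaluate the double inverted
# Gaussian eclipse model on a synthetic magnitude time-series. All numbers
# below are made up for demonstration.
#
#   import numpy as np
#   times = np.arange(2455000.0, 2455010.0, 0.01)  # ~10 days of observations
#   mags = np.full_like(times, 12.0)               # flat 12.0 mag light curve
#   errs = np.full_like(times, 0.005)
#   ebparams = [1.2345,     # period [days]
#               2455001.5,  # epoch of primary minimum [JD]
#               -0.2,       # primary depth [mag]; < 0 for magnitudes
#               0.1,        # primary duration [phase]
#               0.5,        # secondary/primary depth ratio
#               0.5]        # phase of the secondary eclipse
#   modelmags, phase, ptimes, pmags, perrs = invgauss_eclipses_func(
#       ebparams, times, mags, errs
#   )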
def invgauss_eclipses_residual(ebparams, times, mags, errs):
'''This returns the residual between the modelmags and the actual mags.
Parameters
----------
ebparams : list of float
This contains the parameters for the eclipsing binary::
ebparams = [period (time),
epoch (time),
pdepth: primary eclipse depth (mags),
pduration: primary eclipse duration (phase),
psdepthratio: primary-secondary eclipse depth ratio,
secondaryphase: center phase of the secondary eclipse]
`period` is the period in days.
`epoch` is the time of minimum in JD.
`pdepth` is the depth of the primary eclipse.
- for magnitudes -> `pdepth` should be < 0
- for fluxes -> `pdepth` should be > 0
`pduration` is the length of the primary eclipse in phase.
`psdepthratio` is the ratio in the eclipse depths:
`depth_secondary/depth_primary`. This is generally the same as the ratio
of the `T_effs` of the two stars.
`secondaryphase` is the phase at which the minimum of the secondary
eclipse is located. This effectively parameterizes eccentricity.
All of these will then have fitted values after the fit is done.
times,mags,errs : np.array
The input time-series of measurements and associated errors for which
the eclipse model will be generated. The times will be used to generate
model mags, and the input `times`, `mags`, and `errs` will be resorted
by model phase and returned.
Returns
-------
np.array
The residuals between the input `mags` and generated `modelmags`,
weighted by the measurement errors in `errs`.
'''
modelmags, phase, ptimes, pmags, perrs = (
invgauss_eclipses_func(ebparams, times, mags, errs)
)
# this is now a weighted residual taking into account the measurement err
return (pmags - modelmags)/perrs
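# Fitting sketch (not the module's own fitting driver): the residual
# function above has the (params, times, mags, errs) call signature expected
# by scipy.optimize.leastsq, so a first-order EB fit might look like the
# following, with initparams being a rough initial guess in the ebparams
# order described above.
#
#   from scipy.optimize import leastsq
#   initparams = [1.2345, 2455001.5, -0.2, 0.1, 0.5, 0.5]
#   fitparams, covmat, infodict, mesg, ierr = leastsq(
#       invgauss_eclipses_residual,
#       initparams,
#       args=(times, mags, errs),
#       full_output=True
#   )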
def convert_constants(jmag, hmag, kmag,
cjhk,
cjh, cjk, chk,
cj, ch, ck):
'''This converts between JHK and BVRI/SDSS mags.
Not meant to be used directly. See the functions below for a more sensible
interface. This function does the grunt work of converting from JHK to
either BVRI or SDSS ugriz, while taking care of missing values for any of
`jmag`, `hmag`, or `kmag`.
Parameters
----------
jmag,hmag,kmag : float
2MASS J, H, Ks mags to use to convert.
cjhk,cjh,cjk,chk,cj,ch,ck : lists
Constants to use when converting.
Returns
-------
float
The converted magnitude in the SDSS or BVRI system.
'''
if jmag is not None:
if hmag is not None:
if kmag is not None:
return cjhk[0] + cjhk[1]*jmag + cjhk[2]*hmag + cjhk[3]*kmag
else:
return cjh[0] + cjh[1]*jmag + cjh[2]*hmag
else:
if kmag is not None:
return cjk[0] + cjk[1]*jmag + cjk[2]*kmag
else:
return cj[0] + cj[1]*jmag
else:
if hmag is not None:
if kmag is not None:
return chk[0] + chk[1]*hmag + chk[2]*kmag
else:
return ch[0] + ch[1]*hmag
else:
if kmag is not None:
return ck[0] + ck[1]*kmag
else:
return np.nan
def jhk_to_bmag(jmag, hmag, kmag):
'''Converts given J, H, Ks mags to a B magnitude value.
Parameters
----------
jmag,hmag,kmag : float
2MASS J, H, Ks mags of the object.
Returns
-------
float
The converted B band magnitude.
'''
return convert_constants(jmag,hmag,kmag,
BJHK,
BJH, BJK, BHK,
BJ, BH, BK)
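# Usage sketch (made-up 2MASS magnitudes): any of the inputs may be None;
# convert_constants() falls back to the constant set matching the available
# mags and returns np.nan if all three are missing.
#
#   bmag = jhk_to_bmag(12.0, 11.6, 11.5)     # J, H, Ks all available
#   bmag_jh = jhk_to_bmag(12.0, 11.6, None)  # Ks missing -> J,H constants used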
def jhk_to_vmag(jmag,hmag,kmag):
'''Converts given J, H, Ks mags to a V magnitude value.
Parameters
----------
jmag,hmag,kmag : float
2MASS J, H, Ks mags of the object.
Returns
-------
float
The converted V band magnitude.
'''
return convert_constants(jmag,hmag,kmag,
VJHK,
VJH, VJK, VHK,
VJ, VH, VK)
def jhk_to_rmag(jmag,hmag,kmag):
'''Converts given J, H, Ks mags to an R magnitude value.
Parameters
----------
jmag,hmag,kmag : float
2MASS J, H, Ks mags of the object.
Returns
-------
float
The converted R band magnitude.
'''
return convert_constants(jmag,hmag,kmag,
RJHK,
RJH, RJK, RHK,
RJ, RH, RK)
def jhk_to_imag(jmag,hmag,kmag):
'''Converts given J, H, Ks mags to an I magnitude value.
Parameters
----------
jmag,hmag,kmag : float
2MASS J, H, Ks mags of the object.
Returns
-------
float
The converted I band magnitude.
'''
return convert_constants(jmag,hmag,kmag,
IJHK,
IJH, IJK, IHK,
IJ, IH, IK)
def jhk_to_sdssu(jmag,hmag,kmag):
'''Converts given J, H, Ks mags to an SDSS u magnitude value.
Parameters
----------
jmag,hmag,kmag : float
2MASS J, H, Ks mags of the object.
Returns
-------
float
The converted SDSS u band magnitude.
'''
return convert_constants(jmag,hmag,kmag,
SDSSU_JHK,
SDSSU_JH, SDSSU_JK, SDSSU_HK,
SDSSU_J, SDSSU_H, SDSSU_K)