- Download the single beam data from SPARKESX > Files > sbeam > all folders.
- We use an open-source implementation of kernel logistic regression (KLR): https://github.com/RussellTsuchida/klr. Unzip this directory, `cd` into it, and install it using `python -m pip install .`.
- Install the rest of the dependencies using `python -m pip install -r requirements.txt`.
- Depending on your setup, you may wish to run the code differently.
- We ran our experiments on a cluster that uses the SLURM job manager. To run the code using SLURM, point to the location of the data on line 14 of `run.sh`, then run `sbatch run.sh`.
- If you want to manage individual Python instances separately for each `.sf` file, you can run `python runalgo_klr.py FNAME`, where `FNAME` is the absolute path to the `.sf` file.
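As an alternative to SLURM, the sketch below launches `runalgo_klr.py` once per `.sf` file from a single Python process. It is a minimal, hypothetical helper (the script name `launch_all.py` and the `DATA_DIR` path are our own; point `DATA_DIR` at wherever you downloaded the SPARKESX data):

```python
# launch_all.py -- hypothetical helper, not part of the repository.
# Runs runalgo_klr.py once per .sf file found under DATA_DIR.
import pathlib
import subprocess
import sys

DATA_DIR = pathlib.Path("/path/to/sparkesx/sbeam")  # assumption: your download location

for sf_file in sorted(DATA_DIR.rglob("*.sf")):
    # runalgo_klr.py expects FNAME to be an absolute path, so resolve it first.
    result = subprocess.run([sys.executable, "runalgo_klr.py", str(sf_file.resolve())])
    if result.returncode != 0:
        print(f"runalgo_klr.py failed on {sf_file}", file=sys.stderr)
```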
The verbosity of the results that are saved is controlled by the `VERBOSITY` parameter on line 24 of `runalgo_klr.py`.
- If `VERBOSITY` <= 0, results are written to a `scores.npy` file for each `.sf` file. These scores represent the power of each chunk as a function of time, and are also plotted in `scores.png` (a quick inspection sketch follows this list).
- If `VERBOSITY` <= 1, the function f of the RKHS (i.e. the logits of the Bernoulli distribution) is plotted for each chunk of data. These plots are called `_t.png`, where `t` is the chunk index.
- If `VERBOSITY` <= 2, the raw data itself is plotted in `t.png`, where `t` is the chunk index.
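As a quick sanity check on these outputs, a `scores.npy` file can be loaded and re-plotted directly. The sketch below assumes only what is stated above, namely that the file holds one score (chunk power) per time chunk; the script name and output file name are illustrative:

```python
# inspect_scores.py -- hypothetical helper for eyeballing one scores.npy file.
import numpy as np
import matplotlib.pyplot as plt

scores = np.load("scores.npy")  # assumption: one score (chunk power) per time chunk
print(f"{scores.shape[0]} chunks; max score {scores.max():.3f}")

# Re-plot power as a function of time chunk, in the spirit of scores.png.
plt.plot(scores)
plt.xlabel("chunk index")
plt.ylabel("score (power)")
plt.savefig("scores_check.png")
```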
- The file `post_process.py` and the associated SLURM script `post_process.sh` put the scores in a `predictions.csv` file. This file assigns each time chunk a 0 or a 1, depending on whether or not an event is observed.
- Given such post-processed `predictions.csv` files, `collate_results.py` produces the figures given in the paper.
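For reference, a post-processed `predictions.csv` file can be summarised with a few lines of Python. This is a sketch under the assumption (from the description above) that each row pairs a time chunk with a 0/1 event label; the exact column layout and any header row may differ:

```python
# count_events.py -- hypothetical reader for a post-processed predictions.csv.
import csv

with open("predictions.csv", newline="") as f:
    rows = [row for row in csv.reader(f) if row]

# Assumption: the last column holds the 0/1 event label for each time chunk;
# non-numeric rows (e.g. a header) are skipped.
labels = [int(row[-1]) for row in rows if row[-1].strip().isdigit()]
print(f"{sum(labels)} of {len(labels)} chunks flagged as events")
```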