This codebase is dedicated to exploring different self-supervised pretraining methods. We integrate multimodal data, combining supernova light curves with images of their host galaxies. Our goal is to leverage diverse data types to improve the prediction and understanding of astronomical phenomena.
An overview of the CLIP method and loss: link
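For orientation, here is a minimal, hedged sketch of the symmetric contrastive (CLIP-style) loss between light-curve and host-galaxy embeddings; the function name, variable names, and temperature value are illustrative and not taken from this repository's implementation.

```python
import torch
import torch.nn.functional as F

def clip_loss(lc_emb: torch.Tensor, img_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric cross-entropy over cosine-similarity logits (CLIP-style)."""
    # Normalise embeddings so the dot product is a cosine similarity.
    lc_emb = F.normalize(lc_emb, dim=-1)
    img_emb = F.normalize(img_emb, dim=-1)

    logits = lc_emb @ img_emb.t() / temperature       # (batch, batch)
    targets = torch.arange(len(logits), device=logits.device)

    # Matching light-curve / host-image pairs sit on the diagonal.
    loss_lc = F.cross_entropy(logits, targets)        # light curve -> image
    loss_img = F.cross_entropy(logits.t(), targets)   # image -> light curve
    return 0.5 * (loss_lc + loss_img)
```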
Before installing, ensure you have the following prerequisites:
- Python 3.8 or higher
- pip package manager
- Clone the repository to your local machine and navigate into the directory:
```bash
git clone git@github.com:ThomasHelfer/Multimodal-hackathon-2024.git
cd Multimodal-hackathon-2024
```
- Unpack the dataset containing supernova spectra, light curves, and host-galaxy images:
```bash
unzip data/ZTFBTS.zip
unzip data/ZTFBTS_spectra.zip
```
For the larger simulation-based data, download it using wget:
```bash
mkdir sim_data
cd sim_data
wget https://zenodo.org/records/6601211/files/scotch_z3.hdf5
```
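To sanity-check the simulated dataset after download, here is a short sketch using h5py to list the file's contents; the internal group and dataset names are not assumed, only discovered at runtime.

```python
import h5py

# Inspect the simulated dataset; the internal layout is discovered at runtime
# rather than assumed.
with h5py.File("sim_data/scotch_z3.hdf5", "r") as f:
    def show(name, obj):
        # Print every group/dataset path and, for datasets, shape and dtype.
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
        else:
            print(f"{name}/")
    f.visititems(show)
```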
- Install all dependencies listed in the requirements.txt file:
```bash
pip install -r requirements.txt
```
- Execute the main script to start your analysis:
```bash
python script.py --config_path configs/config.yaml
```
- To start off from a checkpoint, use
```bash
python script.py --ckpt_path analysis/batch_sweep/worldly-sweep-4/model.ckpt
```
where a config.yaml corresponding to this run should be in the same folder.
- Sign up for an account at Weights & Biases if you haven't already.
- Edit the configuration file to specify your project name. Ensure the name matches the project you create on wandb.ai. You can define sweep parameters within the config file.
- In the config file you can choose
```yaml
extra_args:
  regression: True
```
If true, script_wandb.py performs a regression for redshift. Similarly for
```yaml
extra_args:
  classification: True
```
If true, script_wandb.py performs a classification; if neither is true, it will perform a normal CLIP pretraining. Lastly,
```yaml
extra_args:
  pretrain_lc_path: 'path_to_checkpoint/checkpoint.ckpt'
  freeze_backbone_lc: True
```
preloads a pretrained model in script_wandb.py or allows restarting a run from a checkpoint for retraining_wandb.py.
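As a purely illustrative sketch (not the actual logic inside script_wandb.py), the flags above act as a mutually exclusive task switch, roughly like this:

```python
import yaml

# Illustrative only: how the extra_args flags described above are expected to
# interact; the real dispatch lives inside script_wandb.py.
with open("configs/config.yaml") as f:
    cfg = yaml.safe_load(f)

extra_args = cfg.get("extra_args", {})
if extra_args.get("regression", False):
    task = "redshift regression"
elif extra_args.get("classification", False):
    task = "classification"
else:
    task = "CLIP pretraining"
print(f"Running task: {task}")
```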
- Start the hyperparameter sweep with the following command:
```bash
python script_wandb.py configs/config_grid.yaml
```
or, for pretraining the light-curve encoder, use:
```bash
python pretraining_wandb.py configs/config_grid.yaml
```
Resume a sweep with the following command:
```bash
python script_wandb.py [sweep_id]
```
- The first execution will prompt you for your Weights & Biases API key, which can be found here.
Alternatively, you can set your API key as an environment variable, especially if running on a compute node:
```bash
export WANDB_API_KEY=...
```
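If you prefer to authenticate from Python rather than the shell, Weights & Biases also provides a programmatic login; this optional sketch assumes the key is already available in your environment.

```python
import os
import wandb

# Optional alternative to the shell export above: log in programmatically.
# The key can come from the environment or be passed explicitly.
wandb.login(key=os.environ.get("WANDB_API_KEY"))
```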
- Monitor and analyze your experiment results on your Weights & Biases project page at wandb.ai.
We can run a k-fold cross-validation by defining the variable
```yaml
extra_args:
  kfolds: 5  # for stratified cross-validation
```
As running all folds serially can take a long time, one can split the runs across separate submissions by choosing certain folds for each submission:
```yaml
foldnumber:
  values: [1, 2, 3]
```
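For intuition only, here is a small scikit-learn sketch of stratified k-fold splitting with a subset of folds selected for one submission; the placeholder data and fold indices are hypothetical, and the repository's own fold handling may differ in detail.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Illustrative only: stratified 5-fold splitting with a subset of folds
# selected for one submission, mirroring the kfolds/foldnumber settings above.
X = np.random.randn(100, 8)             # placeholder features
y = np.random.randint(0, 3, size=100)   # placeholder class labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
selected_folds = [1, 2, 3]

for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    if fold not in selected_folds:
        continue  # remaining folds are run in a separate submission
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
```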