A codebase dedicated to exploring multimodal learning approaches by integrating images of host galaxies of supernovae and their corresponding light-curves and spectra.

# Multimodality with supernovae

License: MIT

## Overview

This codebase is dedicated to exploring different self-supervised pretraining methods. We integrate multimodal data from supernovae light curves with images of their host galaxies. Our goal is to leverage diverse data types to improve the prediction and understanding of astronomical phenomena.

An overview of the CLIP method and its loss can be found at the linked resource.
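A CLIP-style objective pairs each host-galaxy image embedding with the embedding of its own light curve and penalizes mismatched pairs via a symmetric cross-entropy over the cosine-similarity matrix. The following is a minimal, dependency-free sketch of that loss for illustration only; the repository's actual implementation will differ, and the `temperature` value here is an assumed default:

```python
import math

def l2_normalize(vector):
    """Scale a vector to unit length."""
    norm = math.sqrt(sum(x * x for x in vector))
    return [x / norm for x in vector]

def clip_loss(image_embeddings, lightcurve_embeddings, temperature=0.07):
    """Symmetric InfoNCE (CLIP) loss for a batch of paired embeddings.

    Row i of each argument is assumed to describe the same supernova;
    the loss is low when matching pairs are more similar than mismatches.
    """
    a = [l2_normalize(v) for v in image_embeddings]
    b = [l2_normalize(v) for v in lightcurve_embeddings]
    n = len(a)
    # Cosine-similarity matrix, scaled by the temperature.
    logits = [[sum(x * y for x, y in zip(a[i], b[j])) / temperature
               for j in range(n)] for i in range(n)]

    def cross_entropy(rows):
        # Average of -log softmax(row)[i]: the matching pair plays
        # the role of the correct "class" in each row.
        total = 0.0
        for i, row in enumerate(rows):
            m = max(row)
            log_sum_exp = m + math.log(sum(math.exp(x - m) for x in row))
            total += log_sum_exp - row[i]
        return total / n

    columns = [[logits[i][j] for i in range(n)] for j in range(n)]
    # Symmetrize over both retrieval directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(columns))
```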

## Installation

### Prerequisites

Before installing, ensure you have the following prerequisites:

- Python 3.8 or higher
- the `pip` package manager
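Both prerequisites can be checked quickly from a terminal (assuming the interpreter is on your PATH as `python3`):

```shell
# Fails with an AssertionError if the interpreter is older than 3.8
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'
# Confirms pip is available for this interpreter
python3 -m pip --version
```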

### Steps

1. **Clone the repository**

   Clone the repository to your local machine and navigate into the directory:

   ```shell
   git clone git@github.com:ThomasHelfer/Multimodal-hackathon-2024.git
   cd Multimodal-hackathon-2024
   ```

2. **Get the data**

   Unpack the dataset containing supernova spectra, light curves and host-galaxy images:

   ```shell
   unzip data/ZTFBTS.zip
   unzip data/ZTFBTS_spectra.zip
   ```

   For the larger simulation-based dataset, download it with `wget`:

   ```shell
   mkdir sim_data
   cd sim_data
   wget https://zenodo.org/records/6601211/files/scotch_z3.hdf5
   ```

3. **Install the required Python packages**

   Install all dependencies listed in `requirements.txt`:

   ```shell
   pip install -r requirements.txt
   ```

4. **Run the script**

   Execute the main script to start your analysis:

   ```shell
   python script.py --config_path configs/config.yaml
   ```

5. **Restart the script**

   To resume from a checkpoint, use:

   ```shell
   python script.py --ckpt_path analysis/batch_sweep/worldly-sweep-4/model.ckpt
   ```

   A `config.yaml` corresponding to this run must be in the same folder as the checkpoint.
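Once the archives are unpacked, the light curves can be inspected with a short loader. This is a minimal sketch that assumes each object is stored as a CSV file with a header row; the actual file layout and column names depend on the dataset, so the loader simply keeps whatever columns the header declares:

```python
import csv
from pathlib import Path

def load_light_curve(path):
    """Read one light-curve CSV into a dict of columns, converting
    values to float where possible and keeping the rest as strings."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        columns = {name: [] for name in reader.fieldnames}
        for row in reader:
            for name, value in row.items():
                try:
                    columns[name].append(float(value))
                except ValueError:
                    columns[name].append(value)  # non-numeric field
    return columns

def load_all_light_curves(directory):
    """Map each object name (the file stem) to its parsed light curve."""
    return {p.stem: load_light_curve(p)
            for p in sorted(Path(directory).glob("*.csv"))}
```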

## Setting up a hyperparameter scan with Weights & Biases

1. **Create a Weights & Biases account**

   Sign up for an account at Weights & Biases if you haven't already.

2. **Configure your project**

   Edit the configuration file to specify your project name. Ensure the name matches the project you create on wandb.ai. You can define sweep parameters within the config file.

3. **Choose the important parameters**

   In the config file you can set

   ```yaml
   extra_args:
     regression: True
   ```

   If true, `script_wandb.py` performs a regression for redshift. Similarly, with

   ```yaml
   extra_args:
     classification: True
   ```

   `script_wandb.py` performs a classification. If neither is true, it performs standard CLIP pretraining. Lastly,

   ```yaml
   extra_args:
     pretrain_lc_path: 'path_to_checkpoint/checkpoint.ckpt'
     freeze_backbone_lc: True
   ```

   preloads a pretrained model in `script_wandb.py`, or allows restarting a run from a checkpoint for `retraining_wandb.py`.

4. **Run the sweep script**

   Start the hyperparameter sweep with the following command:

   ```shell
   python script_wandb.py configs/config_grid.yaml
   ```

   Resume a sweep with the following command:

   ```shell
   python script_wandb.py [sweep_id]
   ```

   To pretrain the light-curve encoder, use:

   ```shell
   python pretraining_wandb.py configs/config_grid.yaml
   ```

5. **API key configuration**

   The first execution will prompt you for your Weights & Biases API key, which can be found in your account settings. Alternatively, you can set your API key as an environment variable, especially if running on a compute node:

   ```shell
   export WANDB_API_KEY=...
   ```

6. **View results**

   Monitor and analyze your experiment results on your Weights & Biases project page at wandb.ai.
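As a concrete illustration, a grid-sweep config might look like the following. This is a hypothetical sketch: apart from the `extra_args` keys documented above, the parameter names and sweep layout below are placeholders, and the keys actually accepted are defined by the repository's own `configs/config_grid.yaml`:

```yaml
# Hypothetical sweep configuration; names other than extra_args are illustrative.
project: multimodal-supernovae   # must match your project on wandb.ai
method: grid
parameters:
  learning_rate:
    values: [1e-4, 3e-4, 1e-3]
  batch_size:
    values: [32, 64]
extra_args:
  regression: True               # redshift regression mode
```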

## Running a k-fold cross-validation

We can run a k-fold cross-validation by defining the variable

```yaml
extra_args:
  kfolds: 5  # for stratified cross-validation
```

Since running all folds serially can take a very long time, you can split the work across submissions by selecting certain folds for each one:

```yaml
foldnumber:
  values: [1, 2, 3]
```
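The stratified split behind `kfolds` can be sketched as follows. This is a minimal stdlib illustration of the idea, not the repository's implementation: each class's samples are dealt round-robin into k folds, so every fold roughly preserves the overall class balance.

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Assign each sample index to one of k folds, keeping class
    proportions roughly equal across folds (round-robin per class)."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for position, idx in enumerate(indices):
            folds[position % k].append(idx)
    return folds

def train_test_split_for_fold(folds, fold_number):
    """Fold `fold_number` is the held-out set; the rest is training data."""
    test = folds[fold_number]
    train = [i for j, fold in enumerate(folds) if j != fold_number for i in fold]
    return train, test
```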
