Data Augmentation for Transformer-Based Multivariate Time Series Forecasting

Code for my master's thesis, where I investigate the impact that Manifold Mixup data augmentation has on model performance.
Architecture

Multivariate Time Series (MTS) forecasting plays a vital role in a wide range of applications. Manifold Mixup is a technique where data is augmented in the manifold of the network instead of at the input. Since the manifold of a trained neural network represents a compressed representation of the input information, interpolating between samples can yield new, augmented samples that, during fine-tuning, can sharpen the decision boundaries. In this thesis I propose to apply the same technique, which has been shown to be effective in autoencoders and the BERT transformer encoder architecture, to an MTS transformer encoder-decoder architecture.
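
The core idea fits in a few lines of PyTorch. The sketch below is only illustrative, assuming a generic `encoder`/`decoder` pair and a mean-squared-error forecasting loss; `encoder`, `decoder`, and the `alpha` hyperparameter are placeholders, not this repository's implementation. Following the original Manifold Mixup paper, the mixing coefficient λ is drawn from a Beta(α, α) distribution.

```python
import torch
import torch.nn.functional as F

def manifold_mixup_step(encoder, decoder, x, y, alpha=2.0):
    """One training step with Manifold Mixup: interpolate hidden states
    of a random batch pairing and interpolate the targets alike."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))           # random pairing within the batch

    h = encoder(x)                            # hidden states: (batch, seq, d_model)
    h_mixed = lam * h + (1.0 - lam) * h[idx]  # interpolate in the manifold
    y_mixed = lam * y + (1.0 - lam) * y[idx]  # interpolate regression targets

    y_hat = decoder(h_mixed)
    return F.mse_loss(y_hat, y_mixed)
```

Mixing at a hidden layer rather than at the input is what distinguishes Manifold Mixup from plain input mixup: the interpolated hidden states stay close to the compressed representation the network has already learned.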

📚 Table of Contents

data

De-identified data on patients in the Neurological Intensive Care Unit (ICU). The raw data cannot be accessed.

data_air

Beijing Multi-Site Air-Quality Data. To test the manifold mixup methodology, we first try it on a more easily interpretable dataset before moving to the ICU patient data.

figures

Various figures (architecture, performance, EDA)

💿 Requirements

The code is built on Python 3.9 and PyTorch 1.10.0.

After ensuring that PyTorch is installed correctly, you can install other dependencies via:

pip install -r requirements.txt

📑 Methods

I will use the basic Transformer architecture as a baseline and compare the Manifold Mixup technique against the Spacetimeformer. The Spacetimeformer is a special embedding methodology that captures not only temporal information between time steps, but a combination of temporal and spatial information (a rough sketch of this token layout follows the list below). Additionally, I will combine the Manifold Mixup and Spacetimeformer architectures. This yields four configurations:

  • Transformer
  • Transformer + Manifold Mixup
  • Spacetimeformer
  • Spacetimeformer + Manifold Mixup
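
To make the embedding difference concrete, here is a rough, hypothetical sketch of a Spacetimeformer-style spatiotemporal embedding: every (timestep, variable) pair becomes its own token, with learned variable and time embeddings added, so attention can relate values across both dimensions. The class and parameter names are my own and are not taken from the Spacetimeformer codebase.

```python
import torch
import torch.nn as nn

class SpatioTemporalEmbedding(nn.Module):
    def __init__(self, n_vars, seq_len, d_model):
        super().__init__()
        self.value_proj = nn.Linear(1, d_model)           # embed each scalar value
        self.var_embed = nn.Embedding(n_vars, d_model)    # which variable the token came from
        self.time_embed = nn.Embedding(seq_len, d_model)  # which timestep it came from

    def forward(self, x):
        # x: (batch, seq_len, n_vars) -> tokens: (batch, seq_len * n_vars, d_model)
        b, t, v = x.shape
        tokens = self.value_proj(x.reshape(b, t * v, 1))
        var_ids = torch.arange(v, device=x.device).repeat(t)              # 0..v-1 for each step
        time_ids = torch.arange(t, device=x.device).repeat_interleave(v)  # each step repeated v times
        return tokens + self.var_embed(var_ids) + self.time_embed(time_ids)
```

The price of this layout is a sequence of length seq_len × n_vars, which is why the actual Spacetimeformer pairs it with efficient attention approximations.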

Since time-series data is not easily interpretable by humans, I will use PCA and t-SNE to map the multi-dimensional output sequence vectors into two dimensions and visually compare the distributions of the synthetic and real data instances.
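
A minimal sketch of that comparison, with random arrays standing in as placeholders for the real and mixup-generated output vectors:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Placeholder data; in the thesis these would be the model's real and
# mixup-generated output sequence vectors.
real = np.random.randn(200, 64)
synthetic = np.random.randn(200, 64)
combined = np.vstack([real, synthetic])

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, reducer in zip(axes, [PCA(n_components=2), TSNE(n_components=2)]):
    z = reducer.fit_transform(combined)                    # project to 2-D
    ax.scatter(z[:200, 0], z[:200, 1], s=8, label="real")
    ax.scatter(z[200:, 0], z[200:, 1], s=8, label="synthetic")
    ax.set_title(type(reducer).__name__)
    ax.legend()
plt.show()
```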

📊 Performance and Visualization
