Stars
Research Code for MOMENT
GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment
Welcome to LLM-Dojo, an open-source space for learning large language models. It uses concise, readable code to build a model training framework (supporting mainstream models such as Qwen, Llama, GLM, etc.), an RLHF framework (DPO/CPO/KTO/PPO), and other features. 👩🎓👨🎓
mohamedr002 / Practical-Time-Series-In-Python
Forked from youssefHosni/Practical-Time-Series-In-Python
Practical guidance for time series analysis in Python
A curated list of foundation models for vision and language tasks
Unified Training of Universal Time Series Forecasting Transformers
A collection of resources about LLMs for time series tasks
yixinliu233 / SUBLIME
Forked from TrustAGI-Lab/SUBLIME
[WWW'22] Towards Unsupervised Deep Graph Structure Learning
A comprehensive benchmark of Graph Structure Learning (NeurIPS 2023 Datasets and Benchmarks Track)
A curated list of papers on graph structure learning (GSL).
A beginner's Chinese tutorial for PyTorch Lightning; please credit the source when reposting. (Originally written just for fun; it is recommended to read through the MNIST example before diving in.)
MOMENT: A Family of Open Time-series Foundation Models
This repository contains the code for our proposed method "Multi-source Feature Alignment and Label Rectification" (MFA-LR), which has been published in the paper "Learning a Robust Unified Domain …
PyTorch implementation of DANN (Domain-Adversarial Training of Neural Networks)
This project extends the idea of the innovative architecture of Kolmogorov-Arnold Networks (KAN) to the Convolutional Layers, changing the classic linear transformation of the convolution to learna…
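The entry above describes KAN-style convolutions, where each kernel position applies a learnable univariate function instead of a fixed linear weight. A minimal sketch of that idea, assuming a simplified two-term basis (identity + SiLU) in place of the spline bases used by full KAN implementations; the class `SimpleKANConv2d` and its parameters are hypothetical illustrations, not code from the linked repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleKANConv2d(nn.Module):
    """Sketch of a KAN-style convolution: each (out_channel, in_channel,
    kernel position) slot applies a learnable univariate function to its
    input. Real KAN-conv implementations use spline bases; this hypothetical
    version uses phi(t) = w_lin * t + w_act * silu(t) to stay short."""

    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        k = kernel_size
        # One coefficient pair per kernel slot (assumed simplification).
        self.w_lin = nn.Parameter(torch.randn(out_channels, in_channels * k * k) * 0.1)
        self.w_act = nn.Parameter(torch.randn(out_channels, in_channels * k * k) * 0.1)
        self.kernel_size = k

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.kernel_size
        # Extract sliding patches: (B, C*k*k, L), L = number of windows.
        patches = F.unfold(x, kernel_size=k)
        # Apply the learnable function per slot and sum over the receptive
        # field; the einsum performs both steps at once.
        out = torch.einsum("oc,bcl->bol", self.w_lin, patches) \
            + torch.einsum("oc,bcl->bol", self.w_act, F.silu(patches))
        oh, ow = h - k + 1, w - k + 1
        return out.view(b, -1, oh, ow)

# Usage on a small test tensor:
conv = SimpleKANConv2d(3, 8, kernel_size=3)
print(conv(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 8, 30, 30])
```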
A Fair and Scalable Time Series Forecasting Benchmark and Toolkit.
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
[ICML 2024] A novel, efficient approach combining convolutional operations with adaptive spectral analysis as a foundation model for different time series tasks
GraphMAE: Self-Supervised Masked Graph Autoencoders in KDD'22
Official implementation of Evidential Uncertainty Quantification: A Variance-Based Perspective [WACV 2024]
[TPAMI 2024] The official implementation of "Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering Regularized Self-Training"
Tracking papers, datasets, and models of "large language model (LLM) for time series"