Samay is a package for training and evaluating time-series foundation models.
The repository currently includes models such as LPTM and TimesFM (see the usage examples below).
More models will be added soon...
You can add the package to your project by running the following command:
pip install git+https://github.com/AdityaLab/Samay.git
For Linux users with CUDA installed, you can install the package with GPU support by running:
pip install https://github.com/SamayAI/Samay/releases/download/v0.1.0/samay-0.1.0-cp311-cp311-linux_x86_64.whl
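After installing the GPU wheel, you can sanity-check that PyTorch (installed as a dependency) can see your GPU. This snippet is just a verification step, not part of the Samay API:

import torch

print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))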
Note: If the installation fails because Rust is missing, run:
For macOS:
brew install rustup
rustup-init
source ~/.cargo/env
For Linux:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
To develop on the project, clone the repository and set up the environment with uv, which installs the package in editable mode:
## Clone repo
git clone https://github.com/AdityaLab/Samay.git
## Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
## Install dependencies
uv sync --reinstall
Check out the example notebooks in examples/ for more detailed examples. We also have Google Colab notebooks in examples/colab/.
from samay.model import LPTMModel
config = {
    "task_name": "forecasting",
    "forecast_horizon": 192,
    "freeze_encoder": True,   # Freeze the transformer encoder
    "freeze_embedder": True,  # Freeze the patch embedding layer
    "freeze_head": False,     # The linear forecasting head must be trained
}
model = LPTMModel(config)
from samay.dataset import LPTMDataset
train_dataset = LPTMDataset(
    name="ett",
    datetime_col="date",
    path="./data/data/ETTh1.csv",
    mode="train",
    horizon=192,
)
# Validation split (assumes LPTMDataset supports mode="test", mirroring the TimesFM example below)
val_dataset = LPTMDataset(
    name="ett",
    datetime_col="date",
    path="./data/data/ETTh1.csv",
    mode="test",
    horizon=192,
)

finetuned_model = model.finetune(train_dataset)
avg_loss, trues, preds, histories = finetuned_model.evaluate(val_dataset)
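As a rough follow-up (assuming trues and preds convert cleanly to NumPy arrays of matching shape, which may vary across versions), you can compute simple error metrics from the values returned by evaluate:

import numpy as np

trues_arr = np.asarray(trues, dtype=float)
preds_arr = np.asarray(preds, dtype=float)
mse = float(np.mean((trues_arr - preds_arr) ** 2))   # mean squared error
mae = float(np.mean(np.abs(trues_arr - preds_arr)))  # mean absolute error
print(f"MSE: {mse:.4f}, MAE: {mae:.4f}")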
from samay.model import TimesfmModel
from samay.dataset import TimesfmDataset
repo = "google/timesfm-1.0-200m-pytorch"
config = {
    "context_len": 512,
    "horizon_len": 192,
    "backend": "gpu",
    "per_core_batch_size": 32,
    "input_patch_len": 32,
    "output_patch_len": 128,
    "num_layers": 20,
    "model_dims": 1280,
    "quantiles": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
}
tfm = TimesfmModel(config=config, repo=repo)
train_dataset = TimesfmDataset(name="ett", datetime_col="date", path="data/ETTh1.csv",
                               mode="train", context_len=config["context_len"], horizon_len=128)
val_dataset = TimesfmDataset(name="ett", datetime_col="date", path="data/ETTh1.csv",
                             mode="test", context_len=config["context_len"], horizon_len=config["horizon_len"])
finetuned_model = tfm.finetune(train_dataset)
avg_loss, trues, preds, histories = finetuned_model.evaluate(val_dataset)
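To eyeball a forecast, here is a minimal plotting sketch (assuming histories, trues, and preds convert to NumPy arrays indexed by evaluation window; actual shapes may differ, and for multivariate data you may want to select a single channel instead of flattening):

import numpy as np
import matplotlib.pyplot as plt

history = np.asarray(histories)[0].ravel()  # context for the first evaluation window
true = np.asarray(trues)[0].ravel()         # ground-truth future values
pred = np.asarray(preds)[0].ravel()         # model forecast

plt.plot(np.arange(len(history)), history, label="history")
plt.plot(np.arange(len(history), len(history) + len(true)), true, label="ground truth")
plt.plot(np.arange(len(history), len(history) + len(pred)), pred, label="forecast")
plt.legend()
plt.show()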
Tested on Python 3.11-3.13 on Linux (CPU + GPU) and macOS (CPU). NVIDIA GPUs are supported; support for Windows and Apple Silicon GPUs is planned.
If you use this code in your research, please cite the following paper:
@inproceedings{
kamarthi2024large,
title={Large Pre-trained time series models for cross-domain Time series analysis tasks},
author={Harshavardhan Kamarthi and B. Aditya Prakash},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vMMzjCr5Zj}
}
If you have any feedback or questions, you can contact us via email: hkamarthi3@gatech.edu, badityap@cc.gatech.edu.