University of Wisconsin-Madison
Starred repositories
A multi-choice benchmark to evaluate LLM performance on PyChrono API usage
This is an official implementation of "DeformableTST: Transformer for Time Series Forecasting without Over-reliance on Patching" (NeurIPS 2024)
Multivariate Time Series Transformer, public version
Official repository for "TimePFN: Effective Multivariate Time Series Forecasting with Synthetic Data" (AAAI 2025).
A Library for Advanced Deep Time Series Models.
Scalable and user-friendly neural 🧠 forecasting algorithms.
The Electricity Transformer dataset is collected to support further investigation of the long-sequence forecasting problem.
A professionally curated list of awesome resources (paper, code, data, etc.) on transformers in time series.
Foundation Models for Time Series
The GitHub repository for the paper "Informer" accepted by AAAI 2021.
An official implementation of PatchTST: "A Time Series is Worth 64 Words: Long-term Forecasting with Transformers." (ICLR 2023) https://arxiv.org/abs/2211.14730
Chronos: Pretrained Models for Probabilistic Time Series Forecasting
R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning
Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving
Scalable toolkit for efficient model reinforcement
A powerful tool for creating fine-tuning datasets for LLM
EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL
verl: Volcano Engine Reinforcement Learning for LLMs
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM.
ModelScope: bring the notion of Model-as-a-Service to life.
Fully open reproduction of DeepSeek-R1
Interview guide for machine learning engineers, algorithm engineers, software engineers, and data scientists | Interview guide for MLE, SDE, DS
Agent Laboratory is an end-to-end autonomous research workflow designed to assist you, the human researcher, in implementing your research ideas.
New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024)
🟣 LLMs interview questions and answers to help you prepare for your next machine learning and data science interview in 2025.