- Bytedance, Inc
- Haidian, Beijing
- yxinyu.com
- @yxinyu715
Starred repositories
All Algorithms implemented in Python
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
"Dive into Deep Learning" (《动手学深度学习》): aimed at Chinese readers, with runnable code and open discussion. The Chinese and English editions are used for teaching at over 500 universities in more than 70 countries.
A high-throughput and memory-efficient inference and serving engine for LLMs
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Official Code for DragGAN (SIGGRAPH 2023)
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
🌈 Python 3 web scraping in practice: Taobao, JD.com, NetEase Cloud Music, Bilibili, 12306, Douyin, Biquge, comic/novel downloads, music and movie downloads, and more
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
verl: Volcano Engine Reinforcement Learning for LLMs
Large Language Model Text Generation Inference
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
A framework for few-shot evaluation of language models.
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
A faster PyTorch implementation of Faster R-CNN
Chinese version of GPT2 training code, using BERT tokenizer.
Home of StarCoder: fine-tuning & inference!
An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Async Agentic RL)
A debugging and profiling tool that can trace and visualize Python code execution
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
Modeling, training, eval, and inference code for OLMo
Efficient Triton Kernels for LLM Training