Tsinghua University, Peking, China (UTC +08:00)
Stars
A Framework of Small-scale Large Multimodal Models
Large Language Model (LLM) Systems Paper List
MiniSora: A community that aims to explore the implementation path and future development of Sora.
A parallelized VAE that avoids OOM for high-resolution image generation
KnowLA: Enhancing Parameter-efficient Finetuning with Knowledgeable Adaptation, NAACL 2024
[Paper List] Papers integrating knowledge graphs (KGs) and large language models (LLMs)
🔥 Curated Chinese prompts 🔥: a ChatGPT usage guide to make ChatGPT more fun and more useful! 🚀
RUCAIBox / LC-Rec
Forked from zhengbw0324/LC-Rec. [ICDE'24] Code of "Adapting Large Language Models by Integrating Collaborative Semantics for Recommendation."
RayLLM - LLMs on Ray (Archived). Read README for more info.
hyuenmin-choi / splitwise-sim
Forked from mutinifni/splitwise-sim. LLM serving cluster simulator
A collection of AWESOME things about mixture-of-experts
GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM
📰 Must-read papers and blogs on Speculative Decoding ⚡️
[ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache
A plug-and-play tool for visualizing attention-score heatmaps in generative LLMs. Easy to customize for your own needs.
LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis.
Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings)
Development repository for the Triton language and compiler
📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉
Make huge neural nets fit in memory
Crawlers for Xiaohongshu notes and comments, Douyin videos and comments, Kuaishou videos and comments, Bilibili videos and comments, Weibo posts and comments, Baidu Tieba posts and comment replies, and Zhihu Q&A articles and comments.
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
Minimal scripts for 24 GB VRAM GPUs: training, inference, whatever.
Large Language Model Text Generation Inference
Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models
Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing the various techniques, principles, and applications related to large models.