fabric is an open-source framework for augmenting humans using AI. It provides a modular framework for solving specific problems using a crowdsourced set of AI prompts that can be used anywhere.
The official repository of our survey paper: "Towards a Unified View of Preference Learning for Large Language Models: A Survey"
Awesome-llm-role-playing-with-persona: a curated list of resources for large language models for role-playing with assigned personas
Awesome papers for role-playing with language models
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
Methods and evaluation for aligning language models temporally
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Qwen3 is the large language model series developed by Qwen team, Alibaba Cloud.
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Chinese safety prompts for evaluating and improving the safety of LLMs.
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights i…
DeepSeek Coder: Let the Code Write Itself
Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. The model, training data, evaluation data, and evaluation methods have been open-sourced.
Official inference library for Mistral models
A high-throughput and memory-efficient inference and serving engine for LLMs
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
✨✨Latest Advances on Multimodal Large Language Models
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models for specific downstream tasks, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning.