Starred repositories
【LLMs Nine-Layer Pagoda】Hands-on practice and experience with LLMs across natural language processing (ChatGLM, Chinese-LLaMA-Alpaca, Vicuna, LLaMA, GPT4ALL, etc.), information retrieval (langchain), speech synthesis, speech recognition, multimodal models (Stable Diffusion, MiniGPT-4, VisualGLM-6B, Ziya-Visual, etc.), and more.
This repository offers a comprehensive collection of tutorials and implementations for Prompt Engineering techniques, ranging from fundamental concepts to advanced strategies. It serves as an essen…
This repository showcases various advanced techniques for Retrieval-Augmented Generation (RAG) systems. RAG systems combine information retrieval with generative models to provide accurate and cont…
A simple, easy-to-hack GraphRAG implementation
AdalFlow: The library to build & auto-optimize LLM applications.
⚡FlashRAG: A Python Toolkit for Efficient RAG Research (WWW2025 Resource)
Implementation of Graph Transformer in Pytorch, for potential use in replicating Alphafold2
Retrieval Augmented Generation (RAG) chatbot powered by Weaviate
RAG-GPT, leveraging LLM and RAG technology, learns from user-customized knowledge bases to provide contextually relevant answers for a wide range of queries, ensuring rapid and accurate information…
QwQ is the reasoning model series developed by the Qwen team, Alibaba Cloud.
Python library for code analysis with CPG and Joern
🦜🔗 Build context-aware reasoning applications
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
Data processing for and with foundation models! 🍎 🍋 🌽 ➡️ ➡️🍸 🍹 🍷
Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks
Pytorch Geometric Tutorials
Code for the paper - Source Code Vulnerability Detection: Combining Code Language Models and Code Property Graph
A curated list of papers and resources based on "Large Language Models on Graphs: A Comprehensive Survey" (TKDE)
Graph Neural Network Library for PyTorch
A survival guide for new employees and interns. Good Luck and Survive!
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM.
CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
wolfecameron / nanoMoE
Forked from karpathy/nanoGPT. An extension of the nanoGPT repository for training small MoE models.
🚀🚀 Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏
EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL
Train transformer language models with reinforcement learning.
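Several of the repositories above (FlashRAG, RAG-GPT, the Weaviate chatbot, the GraphRAG implementation) build on the Retrieval-Augmented Generation pattern: retrieve the documents most relevant to a query, then condition a generative model on them. A minimal sketch of that pattern follows, assuming nothing from any of the listed libraries — retrieval is illustrated with pure-Python bag-of-words cosine similarity, and the `generate` step is a placeholder standing in for an LLM call:

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern.
# A real system would use dense embeddings and a vector store for
# retrieval, and an LLM for generation; both are simplified here.
from collections import Counter
from math import sqrt

# Toy corpus; the documents and queries are illustrative only.
DOCS = [
    "Weaviate is a vector database used for retrieval augmented generation.",
    "Graph neural networks learn representations over program graphs.",
    "Prompt engineering shapes how large language models respond.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts over a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for the LLM call: stitch retrieved context into a prompt."""
    return f"Answer to {query!r}, grounded in: {' '.join(context)}"

context = retrieve("vector database for retrieval", DOCS)
print(generate("vector database for retrieval", context))
```

The retrieval and generation stages are deliberately decoupled: swapping the bag-of-words scorer for dense embeddings, or the placeholder for a real model, changes neither function's interface, which is the property the toolkits above exploit.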