Awesome resources for in-context learning and prompt engineering: mastering LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date, cutting-edge content.
Writing AI Conference Papers: A Handbook for Beginners
A curated list of resources on efficient large language models
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & V…
✨✨Latest Advances on Multimodal Large Language Models
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
Accelerating the development of large multimodal models (LMMs) with the one-click evaluation module lmms-eval.
Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024)
Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.
Recent LLM-based computer vision and related works. Comments and contributions welcome!
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
Strong and Open Vision Language Assistant for Mobile Devices
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
We will make the code and data public in this repository after legal and user privacy review.
modular domain generalization: https://pypi.org/project/domainlab/
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
This repository contains the code and datasets for our ICCV-W paper 'Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts'
[CVPR 2021] Scan2Cap: Context-aware Dense Captioning in RGB-D Scans
Clustering and autoencoder implementation based on a scenario concept
Open-source implementations of OpenAI Gym MuJoCo environments for use with the OpenAI Gym Reinforcement Learning Research Platform.