RankLLM (WIP)

We offer a suite of prompt decoders, albeit with a current focus on RankVicuna. Some of the code in this repository is borrowed from RankGPT!

Releases

current_version = 0.0.5

📟 Instructions

More instructions to be added soon!

🦙🐧 Model Zoo

The following is a table of our models hosted on HuggingFace:

| Model Name | Hugging Face Identifier/Link |
| --- | --- |
| RankVicuna 7B - V1 | castorini/rank_vicuna_7b_v1 |
| RankVicuna 7B - V1 - No Data Augmentation | castorini/rank_vicuna_7b_v1_noda |
| RankVicuna 7B - V1 - FP16 | castorini/rank_vicuna_7b_v1_fp16 |
| RankVicuna 7B - V1 - No Data Augmentation - FP16 | castorini/rank_vicuna_7b_v1_noda_fp16 |
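
The checkpoints above can be pulled directly from the Hugging Face Hub. Below is a minimal sketch, not this repository's own API, that loads one of them with the `transformers` library and issues a hypothetical listwise reranking prompt; the actual prompt template RankVicuna was trained with may differ.

```python
# Minimal sketch: load a RankVicuna checkpoint from the table above and ask it
# to order two passages by relevance. This uses plain Hugging Face transformers,
# not rank_llm's own interfaces.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "castorini/rank_vicuna_7b_v1"  # any identifier from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # the *_fp16 variants are already stored in half precision
    device_map="auto",           # requires `accelerate`; drop this and call .to("cuda") otherwise
)

# Hypothetical listwise prompt, for illustration only.
prompt = (
    "Rank the following passages by relevance to the query.\n"
    "Query: what causes tides?\n"
    "[1] Tides are caused by the gravitational pull of the moon and sun.\n"
    "[2] The stock market closed higher today.\n"
    "Answer with the passage identifiers in order, e.g. [1] > [2]."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# Decode only the newly generated tokens (the predicted ordering).
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```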

✨ References

If you use RankLLM, please cite the following paper:

@ARTICLE{pradeep2023rankvicuna,
  title   = {RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models},
  author  = {Ronak Pradeep and Sahel Sharifymoghaddam and Jimmy Lin},
  year    = {2023},
  journal = {arXiv preprint arXiv:2309.15088}
}

🙏 Acknowledgments

This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
