Auto-RAG: Autonomous Retrieval-Augmented Generation for Large Language Models
Paper: arXiv:2411.19443
Tian Yu, Shaolei Zhang, and Yang Feng*
You can deploy the model directly with vLLM, for example:
```shell
CUDA_VISIBLE_DEVICES=6,7 python -m vllm.entrypoints.openai.api_server \
    --model PATH_TO_MODEL \
    --gpu-memory-utilization 0.9 \
    -tp 2 \
    --max-model-len 8192 \
    --port 8000 \
    --host 0.0.0.0
```
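Once launched, the server exposes an OpenAI-compatible chat completions endpoint. A minimal Python sketch of how a request could be built and sent is below; the host, port, and `PATH_TO_MODEL` placeholder mirror the launch command above, while the question text, `max_tokens`, and `temperature` values are illustrative assumptions, not settings from the paper:

```python
import json
import urllib.request

# OpenAI-compatible endpoint served by vLLM; host/port match the launch
# command above. PATH_TO_MODEL is the same placeholder used there.
BASE_URL = "http://0.0.0.0:8000/v1/chat/completions"


def build_request(question: str, model: str = "PATH_TO_MODEL") -> dict:
    """Build the JSON payload for a single-turn chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 512,      # illustrative value
        "temperature": 0.0,     # deterministic decoding; adjust as needed
    }


payload = build_request("Who proposed the theory of relativity?")
print(json.dumps(payload, indent=2))

# To query a running server, send the payload as JSON:
# req = urllib.request.Request(
#     BASE_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The POST itself is left commented out so the sketch runs without a live server; uncomment it once the vLLM process is up.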
```bibtex
@article{yu2024autorag,
  title={Auto-RAG: Autonomous Retrieval-Augmented Generation for Large Language Models},
  author={Tian Yu and Shaolei Zhang and Yang Feng},
  year={2024},
  eprint={2411.19443},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.19443},
}
```