---
configs:
- config_name: FPB
  data_files:
  - split: train
    path: train.csv
  - split: test
    path: test.csv
task_categories:
- text-classification
- question-answering
- zero-shot-classification
language:
- en
tags:
- finance
---

# Domain Adaptation of Large Language Models

This repo contains the **FPB dataset** used in our **ICLR 2024** paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).

We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in the biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B.**

### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗

**************************** **Updates** ****************************
* 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets.
* 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!! 🎉
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B.

## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>

### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).

## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this data format perfectly** once transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).

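For reference, here is a minimal sketch of serializing such a conversation into the LLaMA-2-Chat prompt format; the helper and the turn contents are hypothetical placeholders, and the `<<SYS>>` system-prompt block of the template is omitted for brevity (see the linked blog post for the full format):

```python
# Hypothetical helper: serialize (user, assistant) turns into the
# LLaMA-2-Chat prompt format from the blog post linked above.
# The <<SYS>> system-prompt block is omitted for brevity.
def build_llama2_chat_prompt(turns):
    prompt = ""
    for user_msg, assistant_reply in turns:
        prompt += f"<s>[INST] {user_msg.strip()} [/INST]"
        if assistant_reply is not None:
            prompt += f" {assistant_reply.strip()} </s>"
    return prompt

# Placeholder turns mimicking a reading-comprehension text recast as a
# multi-turn conversation (not actual data from our corpora):
prompt = build_llama2_chat_prompt([
    ("Read the following news and answer: what is the sentiment?", "Positive."),
    ("Which phrase in the news signals that sentiment?", None),  # model's turn
])
print(prompt)
```
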
## Domain-Specific Tasks

### Pre-templatized/Formatted Testing Splits
To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions of the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).

**Note:** these filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required for chat models.

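As a sketch, these pre-templatized splits can presumably be loaded by task name; we assume here that each task (e.g. `FPB`) is exposed as a config of the `finance-tasks` repo and carries a `test` split:

```python
from datasets import load_dataset

# Assumed layout: one config per task in the finance-tasks repo, with
# the filled-in instructions/completions in the test split.
fpb_templatized = load_dataset('AdaptLLM/finance-tasks', 'FPB')
print(fpb_templatized['test'][0])
```
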
### Raw Datasets
We have also uploaded the raw training and testing splits to facilitate fine-tuning or other uses (see the loading sketch after this list):
- [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt)
- [RCT](https://huggingface.co/datasets/AdaptLLM/RCT)
- [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA)
- [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA)
- [Headline](https://huggingface.co/datasets/AdaptLLM/Headline)
- [NER](https://huggingface.co/datasets/AdaptLLM/NER)
- [FPB](https://huggingface.co/datasets/AdaptLLM/FPB)

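For example, given the `train.csv`/`test.csv` files declared in this repo's metadata, a minimal sketch of loading the raw FPB splits (the printed field names depend on the CSV headers):

```python
from datasets import load_dataset

# Load the raw FPB splits of this repo; per the metadata above, the
# train and test splits are backed by train.csv and test.csv.
fpb = load_dataset('AdaptLLM/FPB')

print(fpb['train'][0])       # first raw training example
print(fpb['test'].num_rows)  # size of the test split
```
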
The other datasets used in our paper are already available on Hugging Face, so you can directly load them with the following code:
```python
from datasets import load_dataset

# MQP:
dataset = load_dataset('medical_questions_pairs')
# PubMedQA:
dataset = load_dataset('bigbio/pubmed_qa')
# USMLE:
dataset = load_dataset('GBaker/MedQA-USMLE-4-options')
# SCOTUS:
dataset = load_dataset('lex_glue', 'scotus')
# CaseHOLD:
dataset = load_dataset('lex_glue', 'case_hold')
# UNFAIR-ToS:
dataset = load_dataset('lex_glue', 'unfair_tos')
```

## Citation
If you find our work helpful, please cite us:
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```

and the original dataset (the Financial PhraseBank of Malo et al., from which FPB is derived):
```bibtex
@article{FPB,
  author  = {Pekka Malo and Ankur Sinha and Pekka Korhonen and Jyrki Wallenius and Pyry Takala},
  title   = {Good debt or bad debt: Detecting semantic orientations in economic texts},
  journal = {Journal of the Association for Information Science and Technology},
  volume  = {65},
  number  = {4},
  pages   = {782--796},
  year    = {2014}
}
```