Commit 00a6645
1 Parent(s): 3f2aa1f
Update README.md

README.md CHANGED

@@ -2,6 +2,7 @@
license: cc-by-nc-sa-4.0
pipeline_tag: fill-mask
language: en
+arxiv: 2210.05529
tags:
- long-documents
datasets:

@@ -15,36 +16,36 @@ model-index:

## Model description

+This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).

+The model has been warm-started re-using the weights of a miniature BERT (Turc et al., 2019) and further pre-trained for MLM following the paradigm of Longformer released by Beltagy et al. (2020). It supports sequences of up to 1,024 tokens.

+HAT uses hierarchical attention, a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.
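
The pattern can be sketched in a few lines of PyTorch. The snippet below is an illustrative toy with made-up dimensions, not the modeling code that ships with this checkpoint: tokens first attend within their own 128-token segment, and one representative token per segment then attends across segments.

```python
import torch
from torch import nn

# Toy hierarchical attention: segment-wise attention followed by cross-segment
# attention over one representative token per segment (illustration only).
hidden, heads, seg_len, n_segs = 256, 4, 128, 8        # 8 segments x 128 tokens = 1,024 tokens

segment_wise = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
cross_segment = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)

tokens = torch.randn(1, n_segs * seg_len, hidden)      # (batch=1, 1024 tokens, hidden)

# 1) Segment-wise attention: treat every 128-token segment as its own sequence.
segments = tokens.view(n_segs, seg_len, hidden)
segments = segment_wise(segments)

# 2) Cross-segment attention over the first token of each segment.
reps = cross_segment(segments[:, 0, :].unsqueeze(0))   # (1, n_segs, hidden)

# 3) Write the contextualized representatives back into their segments.
segments = torch.cat([reps.squeeze(0).unsqueeze(1), segments[:, 1:, :]], dim=1)
output = segments.view(1, n_segs * seg_len, hidden)
```

In the released checkpoints, blocks like this are interleaved inside a full transformer encoder (see the paper for the exact layouts); the sketch only illustrates the attention pattern.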

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
+See the [model hub](https://huggingface.co/models?filter=hierarchical-transformer) to look for other versions of HAT or fine-tuned versions on a task that interests you.

+Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.

## How to use

+You can use this model directly for masked language modeling:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-I3-mini-1024", trust_remote_code=True)
+mlm_model = AutoModelForMaskedLM.from_pretrained("kiddothe2b/hierarchical-transformer-I3-mini-1024", trust_remote_code=True)
```
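
As a quick sanity check, the loaded model can be queried through its MLM head. The following is a minimal sketch that assumes the `tokenizer` and `mlm_model` from the block above and a made-up example sentence:

```python
import torch

# Predict the token behind [MASK]; assumes `tokenizer` and `mlm_model` from above.
text = f"Hierarchical attention lets the model read {tokenizer.mask_token} documents."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = mlm_model(**inputs).logits

# Locate the masked position, then take the highest-scoring vocabulary id there.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```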

+You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-I3-mini-1024", trust_remote_code=True)
+doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hierarchical-transformer-I3-mini-1024", trust_remote_code=True)
```
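
Fine-tuning for document classification then follows the usual transformers recipe. Below is a minimal sketch that reuses `tokenizer` and `doc_classifier` from above; the two-example dataset, labels, and training settings are placeholders, and it assumes the remote-code model exposes the standard sequence-classification interface:

```python
from datasets import Dataset
from transformers import Trainer, TrainingArguments

# Toy dataset standing in for a real long-document corpus.
train_ds = Dataset.from_dict({
    "text": ["a very long document ...", "another very long document ..."],
    "label": [0, 1],
})

def encode(batch):
    # HAT reads up to 1,024 tokens, so truncate/pad to that length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=1024)

train_ds = train_ds.map(encode, batched=True)

trainer = Trainer(
    model=doc_classifier,
    args=TrainingArguments(output_dir="hat-doc-classifier", num_train_epochs=1),
    train_dataset=train_ds,
)
trainer.train()
```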

## Limitations and bias

@@ -99,11 +100,13 @@ The following hyperparameters were used during training:

## Citing

+If you use HAT in your research, please cite:
+
+[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).

```
@misc{chalkidis-etal-2022-hat,
+url = {https://arxiv.org/abs/2210.05529},
author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
publisher = {arXiv},
@@ -112,3 +115,4 @@ If you use HAT in your research, please cite [An Exploration of Hierarchical Att
```

+