---
library_name: hierarchy-transformers
pipeline_tag: feature-extraction
tags:
- hierarchy-transformers
- feature-extraction
- hierarchy-encoding
- subsumption-relationships
- transformers
license: apache-2.0
language:
- en
---

# Hierarchy-Transformers/HiT-MPNet-WordNetNoun

A **Hi**erarchy **T**ransformer Encoder (HiT) model that explicitly encodes entities according to their hierarchical relationships.

### Model Description

<!-- Provide a longer summary of what this model is. -->

HiT-MPNet-WordNetNoun is a HiT model trained on WordNet's noun hierarchy with random negative sampling.

- **Developed by:** [Yuan He](https://www.yuanhe.wiki/), Zhangdie Yuan, Jiaoyan Chen, and Ian Horrocks
- **Model type:** Hierarchy Transformer Encoder (HiT)
- **License:** Apache license 2.0
- **Hierarchy:** WordNet (Noun)
- **Dataset:** Download `wordnet.zip` from the [Zenodo link](https://zenodo.org/doi/10.5281/zenodo.10511042)
- **Pre-trained model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Training Objectives:** Jointly optimised on the *hyperbolic clustering* and *hyperbolic centripetal* losses (sketched below)
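
A rough sketch of these two objectives (illustrative notation, not the paper's verbatim definitions), where *d* is the hyperbolic distance on the model's Poincaré ball, **0** the manifold origin, (e, e⁺) a positive (entity, parent) pair, e⁻ a random negative, and α, β margin hyperparameters:

```latex
% Illustrative sketch only; see the paper for the exact formulations.
% Clustering: pull entities towards their parents, push them away from negatives.
\mathcal{L}_{cluster} = \sum \max\big(0,\; d(e, e^{+}) - d(e, e^{-}) + \alpha\big)
% Centripetal: push parents closer to the manifold origin than their children.
\mathcal{L}_{centri} = \sum \max\big(0,\; d(e^{+}, \mathbf{0}) - d(e, \mathbf{0}) + \beta\big)
```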

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/KRR-Oxford/HierarchyTransformers
- **Paper:** [Language Models as Hierarchy Encoders](tbd)

## Usage

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

HiT models encode entities (presented as text) and predict their hierarchical relationships in hyperbolic space.

### Get Started

Install `hierarchy_transformers` through `pip`, or from source via our [repository](https://github.com/KRR-Oxford/HierarchyTransformers).
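
For example, via `pip` (assuming the package is published under the same name as its import path):

```bash
pip install hierarchy_transformers
```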

Use the code below to get started with the model.

```python
from hierarchy_transformers import HierarchyTransformer
from hierarchy_transformers.utils import get_torch_device

# set up the device (falls back to cpu if no gpu is found)
gpu_id = 0
device = get_torch_device(gpu_id)

# load the model
model = HierarchyTransformer.load_pretrained('Hierarchy-Transformers/HiT-MPNet-WordNetNoun', device)

# entity names to be encoded
entity_names = ["computer", "personal computer", "fruit", "berry"]

# get the entity embeddings
entity_embeddings = model.encode(entity_names)
```

### Default Probing for Subsumption Prediction

Use the entity embeddings to predict the subsumption relationships between them.

```python
# suppose we want to compare "personal computer" and "computer", and "berry" and "fruit"
child_entity_embeddings = model.encode(["personal computer", "berry"], convert_to_tensor=True)
parent_entity_embeddings = model.encode(["computer", "fruit"], convert_to_tensor=True)

# compute the hyperbolic distances and norms of the entity embeddings
dists = model.manifold.dist(child_entity_embeddings, parent_entity_embeddings)
child_norms = model.manifold.dist0(child_entity_embeddings)
parent_norms = model.manifold.dist0(parent_entity_embeddings)

# use the empirical function for subsumption prediction proposed in the paper;
# `centri_score_weight` and the overall threshold are determined on the validation set
subsumption_scores = dists + centri_score_weight * (parent_norms - child_norms)
```
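
To turn these scores into hard predictions, threshold them; a minimal sketch, where `centri_score_weight` and `threshold` stand in for values tuned on the validation set:

```python
# illustrative values only; both should be tuned on a labelled validation set
centri_score_weight = 1.0
threshold = 0.0

subsumption_scores = dists + centri_score_weight * (parent_norms - child_norms)

# with this scoring, a true (child, parent) pair has a small hyperbolic distance
# and a parent that sits closer to the origin (parent_norms < child_norms),
# so lower scores indicate a more likely subsumption
predictions = subsumption_scores <= threshold
```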

Training and evaluation scripts are available on [GitHub](https://github.com/KRR-Oxford/HierarchyTransformers).
Technical details are presented in the [paper](tbd).

## Full Model Architecture
```
HierarchyTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

Preprint on arXiv:

[More Information Needed]

## Model Card Contact

For any queries or feedback, please contact Yuan He ([email protected]).