math-similarity committed · Commit 6e1701b · verified · 1 Parent(s): d2a9d46

update model card

Files changed (1):
1. README.md (+25 -44)
README.md CHANGED
@@ -8,11 +8,11 @@ tags:
 
 ---
 
-# {MODEL_NAME}
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
-<!--- Describe your model here -->
 
 ## Usage (Sentence-Transformers)
 
@@ -26,9 +26,10 @@ Then you can use the model like this:
 
 ```python
 from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence", "Each sentence is converted"]
 
-model = SentenceTransformer('{MODEL_NAME}')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -51,11 +52,12 @@ def mean_pooling(model_output, attention_mask):
 
 
 # Sentences we want sentence embeddings for
-sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
-model = AutoModel.from_pretrained('{MODEL_NAME}')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -71,56 +73,35 @@ print("Sentence embeddings:")
 print(sentence_embeddings)
 ```
 
 
 
-## Evaluation Results
 
-<!--- Describe how your model was evaluated -->
 
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
 
 
-## Training
-The model was trained with the parameters:
 
-**DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 21967 with parameters:
-```
-{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
-```
 
-**Loss**:
 
-`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
 
-Parameters of the fit()-Method:
-```
-{
-    "epochs": 10,
-    "evaluation_steps": 0,
-    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
-    "max_grad_norm": 1,
-    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-    "optimizer_params": {
-        "lr": 2e-05
-    },
-    "scheduler": "WarmupLinear",
-    "steps_per_epoch": null,
-    "warmup_steps": 10000,
-    "weight_decay": 0.01
-}
-```
 
 
-## Full Model Architecture
-```
-SentenceTransformer(
-  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
-)
-```
 
 ## Citing & Authors
 
-<!--- Describe where people can find more information -->
 
 ---
 
+# Bert-MLM_arXiv-MP-class_zbMath
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
+The model is specifically designed to compute similarities of short mathematical texts.
 
 ## Usage (Sentence-Transformers)
 
 
 ```python
 from sentence_transformers import SentenceTransformer
+sentences = ["In this paper we show how to compute the $\\Lambda_{\\alpha}$ norm, $\\alpha\\ge 0$, using the dyadic grid. This result is a consequence of the description of the Hardy spaces $H^p(R^N)$ in terms of dyadic and special atoms.",
+             "We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant."]
 
+model = SentenceTransformer('math-similarity/Bert-MLM_arXiv-MP-class_zbMath')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
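Since the card positions the model for similarity of short mathematical texts, a natural follow-up is to score the pair. The sketch below is not part of the commit; it uses `sentence_transformers.util.cos_sim`, and the two short strings are abbreviated placeholders for the titles above.

```python
# Minimal sketch (not part of the card): score two titles with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('math-similarity/Bert-MLM_arXiv-MP-class_zbMath')
embeddings = model.encode(["Computing the Lambda_alpha norm via the dyadic grid",
                           "Stirling cycle numbers count unlabeled acyclic single-source automata"])

# cos_sim returns a 1x1 tensor for a single pair of vectors
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```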
 
 
 
 # Sentences we want sentence embeddings for
+sentences = ["In this paper we show how to compute the $\\Lambda_{\\alpha}$ norm, $\\alpha\\ge 0$, using the dyadic grid. This result is a consequence of the description of the Hardy spaces $H^p(R^N)$ in terms of dyadic and special atoms.",
+             "We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant."]
 
 # Load model from HuggingFace Hub
+tokenizer = AutoTokenizer.from_pretrained('math-similarity/Bert-MLM_arXiv-MP-class_zbMath')
+model = AutoModel.from_pretrained('math-similarity/Bert-MLM_arXiv-MP-class_zbMath')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
 
 print(sentence_embeddings)
 ```
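As a follow-up to the Hugging Face Transformers route (an editor's sketch, not from the card), the two pooled embeddings can be compared directly. This assumes the `sentence_embeddings` tensor computed above, with shape (2, 768):

```python
# Sketch continuing the snippet above: cosine similarity between the two
# mean-pooled embeddings (sentence_embeddings has shape (2, 768)).
import torch.nn.functional as F

similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(float(similarity))
```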
 
+---------
 
+## Background
 
+## Intended uses
 
+Our model is intended to be used as a sentence and short-paragraph encoder for mathematical texts. Given an input text, it outputs a vector that captures its semantic information. The sentence vector may be used for information retrieval, clustering, or sentence-similarity tasks.
 
+By default, input text longer than 256 word pieces is truncated.
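The truncation limit can be inspected on the loaded model. A small sketch, assuming the standard sentence-transformers `max_seq_length` attribute; note the removed architecture dump listed 512, so treat the values as illustrative:

```python
# Sketch: inspect (and optionally raise) the word-piece truncation limit.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('math-similarity/Bert-MLM_arXiv-MP-class_zbMath')
print(model.max_seq_length)  # the card states 256 word pieces
model.max_seq_length = 512   # the underlying BERT encoder caps out at 512
```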
 
+## Training procedure
 
+### Domain adaptation
 
+We use the domain-adapted [math-similarity/Bert-MLM_arXiv](https://huggingface.co/math-similarity/Bert-MLM_arXiv) model. Please refer to its model card for more detailed information about the domain-adaptation procedure.
 
+### Pooling
 
+We add a mean-pooling layer on top of the domain-adapted model.
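A minimal sketch of what this amounts to in sentence-transformers terms, assuming the `models` API and the 256-word-piece limit stated above:

```python
# Sketch (not part of the card): the described encoder, i.e. the
# domain-adapted BERT with a mean-pooling layer on top.
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer('math-similarity/Bert-MLM_arXiv',
                                          max_seq_length=256)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768 for BERT-base
    pooling_mode='mean',
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```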
 
+### Fine-tuning
 
+We fine-tune the model using a cosine-similarity objective: it computes the vectors `u = model(sentence_A)` and `v = model(sentence_B)` and measures the cosine similarity between the two. By default, it minimizes the loss `||input_label - cos_score_transformation(cosine_sim(u, v))||_2`, i.e. the mean squared error.
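This is the standard `CosineSimilarityLoss` setup. A sketch, continuing the `model` assembled above, with the hyperparameters listed in the removed "Training" section (batch size 16, 10 epochs, lr 2e-5, 10,000 warmup steps); the title strings are hypothetical placeholders:

```python
# Sketch: fine-tuning with CosineSimilarityLoss, which minimizes the MSE
# between the gold label and cos_sim(u, v).
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, losses

train_examples = [
    InputExample(texts=["title A", "title B"], label=1.0),  # similar pair
    InputExample(texts=["title A", "title C"], label=0.0),  # dissimilar pair
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=10000,
    optimizer_params={'lr': 2e-05},
)
```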
 
+We use title pairs from [zbMath](https://zbmath.org) as the fine-tuning dataset and model semantic similarity with their MSC codes. Two titles are defined as similar if they share their primary MSC<sub>5</sub> code and at least one secondary MSC<sub>5</sub> code; otherwise, they are defined as semantically dissimilar.
+The training set contains 351,472 title pairs and the evaluation set contains 43,935 pairs. See the [training notebook](https://github.com/math-collab/text-similarity/blob/main/Bert-MLM%20%2B%20mean%20pooling%20%2B%20fine-tune%20zbMath-class.ipynb) for more information.
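The pairing rule can be stated compactly. A sketch with a hypothetical helper; the function name and the example MSC codes are illustrative, not from the card:

```python
# Sketch: the similarity rule stated above. MSC_5 codes are five-character
# MSC classes such as '42B30'.
def is_similar(primary_a, secondaries_a, primary_b, secondaries_b):
    """1.0 iff both titles share the primary MSC_5 code and at least one
    secondary MSC_5 code; 0.0 otherwise."""
    same_primary = primary_a == primary_b
    shared_secondary = bool(set(secondaries_a) & set(secondaries_b))
    return 1.0 if (same_primary and shared_secondary) else 0.0

print(is_similar('42B30', ['42B25'], '42B30', ['42B25', '46E30']))  # 1.0
print(is_similar('42B30', ['42B25'], '05A15', ['42B25']))           # 0.0
```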
 
+Unfortunately, we cannot include a dataset with titles due to licensing issues. However, we have created a dataset that only contains the respective zbMath identifiers (also known as *an*) together with their primary and secondary MSC classifications, but without titles. It is available as [datasets/math-similarity/class-zbmath-identifier](https://huggingface.co/datasets/math-similarity/class-zbmath-identifier).
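The identifier dataset should load with the standard `datasets` API. A sketch; the split and column names depend on the repository's actual layout:

```python
# Sketch: load the identifier-only dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("math-similarity/class-zbmath-identifier")
print(ds)  # inspect splits and columns; exact layout depends on the repo
```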
 
 ## Citing & Authors
 
+This model is an additional resource for the [CICM'24](https://cicm-conference.org/2024/cicm.php) submission *On modelling similarity of short mathematical texts*.