Update README.md

README.md
---
library_name: transformers
tags:
- nucleotide-transformer
- PLASMID-prediction
- bioinformatics
- sequence-classification
- LoRA
---

# Model Card for DraPLASMID-2.5b-v1

This model is a fine-tuned version of the Nucleotide Transformer (2.5B parameters, multi-species) for plasmid prediction, optimized for handling class imbalance and training efficiency.

## Model Details

### Model Description

This model is a fine-tuned version of InstaDeepAI's Nucleotide Transformer (2.5B parameters, multi-species), designed for binary classification of nucleotide sequences as plasmid-derived or not. It leverages LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning and includes optimizations for class imbalance and training efficiency, with checkpointing to handle Google Colab's 24-hour runtime limit. The model was trained on a dataset of positive (plasmid) and negative (non-plasmid) sequences.
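
How the class imbalance was handled is not specified here; one common approach is a class-weighted cross-entropy loss via a `Trainer` subclass. The sketch below is an illustrative assumption, not a confirmed detail of this model's training:

```python
import torch
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    """Hypothetical trainer that up-weights the minority class in the loss."""

    def __init__(self, class_weights, **kwargs):
        super().__init__(**kwargs)
        self.class_weights = class_weights  # e.g. torch.tensor([1.0, 3.0])

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        # Weighted cross-entropy penalizes mistakes on the rarer class more.
        loss_fct = torch.nn.CrossEntropyLoss(
            weight=self.class_weights.to(outputs.logits.device)
        )
        loss = loss_fct(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss
```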

- **Developed by:** Blaise Alako
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** alakob
- **Model type:** Sequence Classification
- **Language(s) (NLP):** Nucleotide sequences
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** InstaDeepAI/nucleotide-transformer-2.5b-multi-species

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

This model can be used directly to predict whether a given nucleotide sequence is plasmid-derived, without additional fine-tuning.

### Downstream Use [optional]

The model can be further fine-tuned for related plasmid classification tasks or integrated into larger bioinformatics pipelines for genomic analysis.

### Out-of-Scope Use

The model is not intended for general-purpose sequence analysis beyond plasmid prediction, nor for non-biological sequence data. Misuse could include applying it to unrelated classification tasks for which its training data and architecture are not suited.

## Bias, Risks, and Limitations

The model may exhibit bias due to imbalances in the training dataset or underrepresentation of certain plasmid types. It is limited by the quality and diversity of the training sequences and may not generalize well to rare or novel plasmids.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Validation on diverse datasets and careful interpretation of predictions are recommended.

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species")
model = AutoModelForSequenceClassification.from_pretrained("alakob/DraPLASMID-2.5b-v1")

# Example inference
sequence = "ATGC..."  # Replace with your nucleotide sequence
inputs = tokenizer(sequence, truncation=True, max_length=1000, return_tensors="pt")
outputs = model(**inputs)
prediction = outputs.logits.argmax(-1).item()  # 0 = non-plasmid, 1 = plasmid
```
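
If the Hub repository stores only the LoRA adapter rather than merged weights (an assumption worth verifying against the repository's file list), the adapter can be attached to the base model with PEFT before inference:

```python
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

# Assumption: alakob/DraPLASMID-2.5b-v1 contains a LoRA adapter, not merged weights.
base = AutoModelForSequenceClassification.from_pretrained(
    "InstaDeepAI/nucleotide-transformer-2.5b-multi-species", num_labels=2
)
model = PeftModel.from_pretrained(base, "alakob/DraPLASMID-2.5b-v1")
model = model.merge_and_unload()  # optionally fold the adapter into the base weights
```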

## Training Details

### Training Data

The model was trained on the DraPLASMID-2.5b-v1 dataset, consisting of 1200 overlapping sequences:

- **Negative sequences (non-plasmid):** `DSM_20231.fasta`, `ecoli-k12.fasta`, `FDA.fasta`
- **Positive sequences (plasmid):** plasmid sequences

### Training Procedure

#### Preprocessing [optional]

Sequences were tokenized using the Nucleotide Transformer tokenizer with a maximum length of 1000 tokens and truncation applied where necessary.
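
As a sketch, the same preprocessing with the `datasets` library might look like this (the toy in-memory dataset and column names are illustrative assumptions; the real data comes from the FASTA files listed above):

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species")

# Toy stand-in for the real sequence data.
ds = Dataset.from_dict({"sequence": ["ATGCATGC", "GGTACCTA"], "label": [1, 0]})

def tokenize(batch):
    # Truncate long sequences to the 1000-token budget described above.
    return tokenizer(batch["sequence"], truncation=True, max_length=1000)

ds = ds.map(tokenize, batched=True)
```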

#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
- **Learning rate:** 5e-5
- **Batch size:** 8 (with gradient accumulation steps = 8)
- **Epochs:** 10
- **Optimizer:** AdamW (default in Hugging Face Trainer)
- **Scheduler:** Linear with 10% warmup
- **LoRA parameters:** `r=32`, `alpha=64`, `dropout=0.1`, `target_modules=["query", "value"]` (see the configuration sketch below)
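
A minimal sketch of how these settings map onto `TrainingArguments` and `LoraConfig` (the output path is a placeholder; everything else mirrors the list above, plus the checkpointing policy described in the next subsection):

```python
from transformers import TrainingArguments
from peft import LoraConfig, TaskType

training_args = TrainingArguments(
    output_dir="checkpoints",          # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    fp16=True,
    save_steps=500,                    # checkpoint every 500 steps
    save_total_limit=3,                # keep only the last 3 checkpoints
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
```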

#### Speeds, Sizes, Times [optional]

Training was performed on Google Colab with checkpointing every 500 steps, retaining the last 3 checkpoints. Exact throughput and training time depend on Colab's hardware allocation (typically a T4 GPU).

---

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The test set was derived from a 10% split of the DraPLASMID-2.5b-v1 dataset, stratified by plasmid labels.
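
A sketch of such a stratified 10% hold-out with scikit-learn (placeholder data; the actual split would use the sequences and labels from the training dataset):

```python
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real sequences and labels.
sequences = ["ATGCATGC"] * 10 + ["GGTACCTA"] * 10
labels = [1] * 10 + [0] * 10  # 1 = plasmid, 0 = non-plasmid

# 10% held out for testing, stratified so both classes appear in each split.
train_seqs, test_seqs, train_labels, test_labels = train_test_split(
    sequences, labels, test_size=0.10, stratify=labels, random_state=42
)
```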

#### Factors

Evaluation was performed across plasmid and non-plasmid classes.

#### Metrics

- **Accuracy:** Proportion of correct predictions
- **F1 Score:** Harmonic mean of precision and recall (primary metric)
- **Precision:** Positive predictive value
- **Recall:** Sensitivity
- **ROC-AUC:** Area under the receiver operating characteristic curve
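
These metrics can be reported during training with a `compute_metrics` callback for the Hugging Face `Trainer`; the function below is an illustrative sketch, not necessarily the exact one used:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, precision_score, recall_score, roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    # Numerically stable softmax; keep the positive-class probability for ROC-AUC.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted)[:, 1] / np.exp(shifted).sum(axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "roc_auc": roc_auc_score(labels, probs),
    }
```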

### Results

[More Information Needed]

#### Summary

[More Information Needed]

---

## Model Examination [optional]

[More Information Needed]

---

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Colab GPU (typically NVIDIA T4)
- **Hours used:** [More Information Needed]
- **Cloud Provider:** Google Colab
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

---

## Technical Specifications [optional]

### Model Architecture and Objective

The model uses the Nucleotide Transformer architecture (2.5B parameters) with a sequence classification head, fine-tuned with LoRA for plasmid prediction.
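
As a quick check of what LoRA fine-tuning touches, PEFT can report the trainable parameter count when the adapter configuration from the training section is applied to the base model (a sketch, assuming the listed LoRA settings are complete):

```python
from transformers import AutoModelForSequenceClassification
from peft import get_peft_model, LoraConfig, TaskType

base = AutoModelForSequenceClassification.from_pretrained(
    "InstaDeepAI/nucleotide-transformer-2.5b-multi-species", num_labels=2
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS, r=32, lora_alpha=64,
    lora_dropout=0.1, target_modules=["query", "value"],
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices and the head train
```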

### Compute Infrastructure

Training was performed on Google Colab with persistent storage via Google Drive.

#### Hardware

- NVIDIA T4 GPU (typical Colab allocation)

#### Software

- Transformers (Hugging Face)
- PyTorch
- PEFT (Parameter-Efficient Fine-Tuning)
- Weights & Biases (wandb) for logging

---

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

---

## Glossary [optional]

- **Plasmid:** A small DNA molecule, typically circular, that replicates independently of the host's chromosomal DNA
- **LoRA:** Low-Rank Adaptation, a parameter-efficient fine-tuning method
- **Nucleotide Transformer:** A transformer-based model for nucleotide sequence analysis

---

## More Information [optional]

[More Information Needed]

---

## Model Card Authors [optional]

Blaise Alako

---

## Model Card Contact

[More Information Needed]
|