# Model Card for DraPLASMID-2.5b-v1

This model is a fine-tuned version of the Nucleotide Transformer (2.5B parameters, multi-species) for plasmid prediction, optimized for handling class imbalance and training efficiency.

## Model Details

### Model Description

This model is a fine-tuned version of InstaDeepAI's Nucleotide Transformer (2.5B parameters, multi-species) designed for binary classification of nucleotide sequences, predicting whether a sequence is plasmid-derived. It leverages LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning and includes optimizations for class imbalance and training efficiency, with checkpointing to work around Google Colab's 24-hour runtime limit. The model was trained on a dataset of positive (plasmid) and negative (non-plasmid) sequences.
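The card does not state exactly how class imbalance was handled. Purely as an illustrative assumption (not the author's confirmed method), one common approach is a class-weighted cross-entropy loss applied through a `Trainer` subclass:

```python
import torch
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    """Hypothetical sketch: weights the loss to counter plasmid/non-plasmid imbalance."""

    def __init__(self, class_weights, **kwargs):
        super().__init__(**kwargs)
        self.class_weights = class_weights  # e.g., torch.tensor([1.0, 3.0])

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        # Up-weight the minority class when averaging the loss
        loss_fct = torch.nn.CrossEntropyLoss(
            weight=self.class_weights.to(outputs.logits.device)
        )
        loss = loss_fct(outputs.logits.view(-1, 2), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```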

- **Developed by:** Blaise Alako
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** alakob
- **Model type:** Sequence classification
- **Language(s) (NLP):** Nucleotide sequences
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** InstaDeepAI/nucleotide-transformer-2.5b-multi-species

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

This model can be used directly to predict whether a given nucleotide sequence is plasmid-derived, without additional fine-tuning.

### Downstream Use [optional]

The model can be further fine-tuned for more specific plasmid-related classification tasks or integrated into larger bioinformatics pipelines for genomic analysis.

### Out-of-Scope Use

The model is not intended for general-purpose sequence analysis beyond plasmid prediction, nor for non-biological sequence data. Misuse includes applying it to unrelated classification tasks for which its training data and architecture are not suited.

## Bias, Risks, and Limitations

The model may exhibit bias due to imbalances in the training dataset or underrepresentation of certain plasmid types. It is limited by the quality and diversity of the training sequences and may not generalize well to rare or novel plasmids.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Validation on diverse datasets and careful interpretation of predictions are recommended.

## How to Get Started with the Model

Use the code below to get started with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer from the base model and the fine-tuned classifier
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species")
model = AutoModelForSequenceClassification.from_pretrained("alakob/DraPLASMID-2.5b-v1")
model.eval()

# Example inference
sequence = "ATGC..."  # Replace with your nucleotide sequence
inputs = tokenizer(sequence, truncation=True, max_length=1000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
prediction = outputs.logits.argmax(-1).item()  # 0 = non-plasmid, 1 = plasmid
```
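If you want a score rather than a hard label, a softmax over the logits gives class probabilities. This is a standard PyTorch step, not something specific to this model, and it continues from the block above:

```python
# Probability that the input sequence is plasmid-derived (class index 1)
probs = torch.softmax(outputs.logits, dim=-1)
plasmid_score = probs[0, 1].item()
```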

## Training Details

### Training Data

The model was trained on the DraPLASMID-2.5b-v1 dataset, consisting of 1200 overlapping sequences:

- **Negative sequences (non-plasmid):**  
  `DSM_20231.fasta`, `ecoli-k12.fasta`, `FDA.fasta`

- **Positive sequences (plasmid):**  
  Plasmid sequences

### Training Procedure

#### Preprocessing [optional]

Sequences were tokenized using the Nucleotide Transformer tokenizer with a maximum length of 1000 tokens and truncation applied where necessary.
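As a sketch, the preprocessing corresponds to a tokenizer call along these lines (the helper name and example sequences are illustrative, not from the training code):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species")

def tokenize_batch(sequences):
    # Truncate anything longer than the 1000-token limit used in training
    return tokenizer(sequences, truncation=True, max_length=1000, padding="max_length")

encoded = tokenize_batch(["ATGCGTACGT", "GGGCCCATAT"])
print(len(encoded["input_ids"][0]))  # 1000 after padding
```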

#### Training Hyperparameters

- **Training regime:** fp16 mixed precision  
- **Learning rate:** 5e-5  
- **Batch size:** 8 (with gradient accumulation steps = 8)  
- **Epochs:** 10  
- **Optimizer:** AdamW (default in Hugging Face Trainer)  
- **Scheduler:** Linear with 10% warmup  
- **LoRA parameters:** `r=32`, `alpha=64`, `dropout=0.1`, `target_modules=["query", "value"]`
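A minimal sketch of how these settings map onto a PEFT configuration; the `SEQ_CLS` task type and `num_labels=2` are assumptions consistent with binary classification, not taken verbatim from the training script:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base_model = AutoModelForSequenceClassification.from_pretrained(
    "InstaDeepAI/nucleotide-transformer-2.5b-multi-species",
    num_labels=2,
)

lora_config = LoraConfig(
    r=32,                               # rank of the low-rank update matrices
    lora_alpha=64,                      # scaling factor for the LoRA updates
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
    task_type="SEQ_CLS",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices (and head) train
```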

#### Speeds, Sizes, Times [optional]

Training was performed on Google Colab with checkpointing every 500 steps, retaining the last 3 checkpoints.  
Exact throughput and times depend on Colab's hardware allocation (typically T4 GPU).
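For reference, the hyperparameters and checkpointing policy above translate into `TrainingArguments` roughly as follows (the output directory is a placeholder, not the actual Drive path):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/content/drive/MyDrive/draplasmid-checkpoints",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,      # effective batch size of 64
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                   # 10% linear warmup
    fp16=True,                          # mixed-precision training
    save_steps=500,                     # checkpoint every 500 steps
    save_total_limit=3,                 # keep only the last 3 checkpoints
)
```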

---

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The test set was derived from a 10% split of the DraPLASMID-2.5b-v1 dataset, stratified by the plasmid/non-plasmid label.
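Such a split can be reproduced with scikit-learn along these lines (the data, variable names, and seed are illustrative):

```python
from sklearn.model_selection import train_test_split

# Illustrative data; in practice sequences and labels come from the FASTA files above
sequences = ["ATGC" * 25, "GGCC" * 25] * 10
labels = [0, 1] * 10  # 0 = non-plasmid, 1 = plasmid

# Stratification keeps the class ratio identical in the train and test sets
train_seqs, test_seqs, train_labels, test_labels = train_test_split(
    sequences, labels, test_size=0.10, stratify=labels, random_state=42
)
```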

#### Factors

Evaluation was performed across the plasmid and non-plasmid classes.

#### Metrics

- **Accuracy:** Proportion of correct predictions  
- **F1 Score:** Harmonic mean of precision and recall (primary metric)  
- **Precision:** Positive predictive value  
- **Recall:** Sensitivity  
- **ROC-AUC:** Area under the receiver operating characteristic curve
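A sketch of a `compute_metrics` function producing this metric set with scikit-learn (illustrative, not the exact evaluation code):

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, precision_score, recall_score, roc_auc_score
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Numerically stable softmax to get the positive-class probability for ROC-AUC
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "roc_auc": roc_auc_score(labels, probs[:, 1]),
    }
```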

### Results

[More Information Needed]

#### Summary

[More Information Needed]

---

## Model Examination [optional]

[More Information Needed]

---

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Colab GPU (typically NVIDIA T4)  
- **Hours used:** [More Information Needed]  
- **Cloud Provider:** Google Colab  
- **Compute Region:** [More Information Needed]  
- **Carbon Emitted:** [More Information Needed]

---

## Technical Specifications [optional]

### Model Architecture and Objective

The model uses the Nucleotide Transformer architecture (2.5B parameters) with a sequence classification head, fine-tuned with LoRA for plasmid prediction.

### Compute Infrastructure

Training was performed on Google Colab with persistent storage via Google Drive.

#### Hardware

- NVIDIA T4 GPU (typical Colab allocation)

#### Software

- Transformers (Hugging Face)  
- PyTorch  
- PEFT (Parameter-Efficient Fine-Tuning)  
- Weights & Biases (wandb) for logging

---

## Citation [optional]

**BibTeX:**  
[More Information Needed]

**APA:**  
[More Information Needed]

---

## Glossary [optional]

- **Plasmid:** A small, usually circular DNA molecule that replicates independently of the chromosome and often carries accessory genes, such as antimicrobial resistance genes  
- **LoRA:** Low-Rank Adaptation, a parameter-efficient fine-tuning method  
- **Nucleotide Transformer:** A transformer-based foundation model for nucleotide sequence analysis

---

## More Information [optional]

[More Information Needed]

---

## Model Card Authors [optional]

Blaise Alako

---

## Model Card Contact

[More Information Needed]