---
tags:
  - text-classification
  - transformers
  - biobert
  - miRNA
  - biomedical
  - LoRA
  - fine-tuning
library_name: transformers
datasets:
  - custom-biomedical-dataset
license: apache-2.0
---

# 🧬 miRNA-BioBERT: Fine-Tuned BioBERT for miRNA Sentence Classification  
**Fine-tuned BioBERT model for classifying miRNA-related sentences in biomedical research papers.**  

<!-- 🔗 **Hugging Face Model Link**: [debjit20504/miRNA-biobert](https://huggingface.co/debjit20504/miRNA-biobert) -->

---

## 📌 Overview  
**miRNA-BioBERT** is a fine-tuned version of [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1), trained specifically to **classify sentences** as **miRNA-relevant or irrelevant**. The model is useful for **automating literature reviews**, **extracting relevant sentences**, and **identifying key insights** in genomic research.  

✔ **Base Model**: `dmis-lab/biobert-base-cased-v1.1`  
✔ **Fine-tuning Method**: **LoRA (Low-Rank Adaptation)**  
✔ **Dataset**: **Curated biomedical text corpus containing labeled miRNA-relevant and non-relevant sentences**  
✔ **Task**: **Binary classification (1 = functional/miRNA-relevant, 0 = non-functional/irrelevant)**  
✔ **Trained on**: **RTX A6000 GPU (5 epochs, batch size 32, learning rate 2e-5)**  

## 🚀 How to Use the Model  
### 1️⃣ Install Dependencies  
```bash
pip install transformers torch
```

### 2️⃣ Load the Model and Classify Sentences  
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the model and tokenizer
model_name = "debjit20504/miRNA-biobert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Move the model to the best available device (CUDA GPU, Apple MPS, or CPU)
device = torch.device(
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
model.to(device)
model.eval()

def classify_text(text):
    # Truncate long inputs to the model's maximum sequence length
    inputs = tokenizer(text, return_tensors="pt", truncation=True).to(device)
    with torch.no_grad():
        output = model(**inputs)
        label = torch.argmax(output.logits, dim=1).item()
    return "Functional" if label == 1 else "Non-functional"

# Example test
sample_text = "miRNA translation is regulated by miRNAs."
print(f"Classification: {classify_text(sample_text)}")
```
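
### 3️⃣ Batch Classification (Optional)  
To screen many sentences at once (e.g. when filtering abstracts), the `pipeline` API is a compact alternative. This is a minimal sketch; the example sentences are hypothetical, and the returned label names (such as `LABEL_0`/`LABEL_1`) depend on the checkpoint's `id2label` mapping, so check them against the convention above (1 = functional) before relying on the output.

```python
from transformers import pipeline

# Batch classification via the pipeline API (runs on CPU by default;
# pass device=0 to use the first CUDA GPU)
classifier = pipeline("text-classification", model="debjit20504/miRNA-biobert")

# Hypothetical example sentences for illustration
sentences = [
    "miR-21 overexpression promotes tumor growth in hepatocellular carcinoma.",
    "Samples were centrifuged at 10,000 g for 15 minutes.",
]
for sentence, result in zip(sentences, classifier(sentences)):
    # Each result is a dict like {"label": "...", "score": ...}
    print(f"{result['label']} ({result['score']:.3f}): {sentence}")
```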

## 📊 Training Details
- Dataset: Biomedical text corpus with 429,785 miRNA-relevant sentences and 87,966 irrelevant sentences.
- Fine-Tuning Method: LoRA (Low-Rank Adaptation) for parameter-efficient training (see the sketch after this list).
- Training Hardware: NVIDIA RTX A6000 GPU.
- Training Settings:
    - Batch size: 32
    - Learning rate: 2e-5
    - Optimizer: AdamW
    - Warmup steps: 1000
    - Epochs: 5
    - Mixed precision (fp16): ✅ Enabled for efficiency.
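
For reference, below is a minimal sketch of how a comparable LoRA fine-tuning run could be set up with Hugging Face `peft` and `Trainer`. The batch size, learning rate, warmup steps, epoch count, and fp16 flag mirror the settings above (AdamW is the `Trainer` default optimizer); the LoRA rank/alpha/dropout, target modules, and toy dataset are illustrative assumptions, not the exact configuration used for this model.

```python
from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Wrap the base model with a LoRA adapter; rank, alpha, dropout, and
# target modules here are assumptions, not the card's actual values
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_config)

# Toy stand-in for the curated corpus (the real dataset is not public here)
raw = Dataset.from_dict({
    "text": [
        "miR-155 regulates immune cell differentiation.",
        "The buffer was adjusted to pH 7.4.",
    ],
    "label": [1, 0],
})
train_dataset = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

# Settings from the card: batch 32, lr 2e-5, 1000 warmup steps, 5 epochs, fp16
args = TrainingArguments(
    output_dir="mirna-biobert-lora",
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    warmup_steps=1000,
    num_train_epochs=5,
    fp16=True,
)
Trainer(model=model, args=args, train_dataset=train_dataset).train()
```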

---

## 📖 Model Applications  
✅ **Biomedical NLP** – Extracting meaningful information from biomedical literature.  
✅ **miRNA Research** – Identifying sentences discussing miRNA mechanisms.  
✅ **Automated Literature Review** – Filtering relevant studies efficiently.  
✅ **Genomics & Bioinformatics** – Enhancing data retrieval from scientific texts.  

---

## 📬 Contact
For any questions or collaborations, reach out via:  

**📧 Email**: [email protected]  
**🔗 LinkedIn**: https://www.linkedin.com/in/debjit-pramanik-88a837171/