debjit20504 committed
Commit 4dd3b02 · 1 Parent(s): 9e2aecf

updated readme file

Files changed (1)
  1. README.md +72 -1
README.md CHANGED

# 🧬 mRNA-BioBERT: Fine-Tuned BioBERT for mRNA Sentence Classification
**Fine-tuned BioBERT model for classifying mRNA-related sentences in biomedical research papers.**

🔗 **Hugging Face Model Link**: [debjit20504/mRNA-biobert](https://huggingface.co/debjit20504/mRNA-biobert)

---

## 📌 Overview
**mRNA-BioBERT** is a fine-tuned version of [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1), trained specifically for **classifying sentences** as **mRNA-related (relevant) or not (irrelevant)**. The model is useful for **automating literature reviews**, **extracting relevant sentences**, and **identifying key insights** in genomic research.

✔ **Base Model**: `dmis-lab/biobert-base-cased-v1.1`
✔ **Fine-tuning Method**: **LoRA (Low-Rank Adaptation)**
✔ **Dataset**: **Curated biomedical text corpus containing labeled mRNA-relevant and non-relevant sentences**
✔ **Task**: **Binary classification (1 = relevant, 0 = not relevant)**
✔ **Trained on**: **RTX A6000 GPU (5 epochs, batch size 32, learning rate 2e-5)**

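For a quick sanity check of the label mapping above, the model can also be loaded through the `transformers` pipeline API. This is a minimal sketch rather than part of the original card; the raw label names come from the model's config and may display as `LABEL_0` / `LABEL_1`, corresponding to the 0/1 mapping above:

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned checkpoint
clf = pipeline("text-classification", model="debjit20504/mRNA-biobert")
print(clf("mRNA translation is regulated by miRNAs."))
```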

---

## 📖 Model Applications
✅ **Biomedical NLP** – Extracting meaningful information from biomedical literature.
✅ **mRNA Research** – Identifying sentences discussing mRNA mechanisms.
✅ **Automated Literature Review** – Filtering relevant studies efficiently.
✅ **Genomics & Bioinformatics** – Enhancing data retrieval from scientific texts.

---

## 🚀 How to Use the Model
### 1️⃣ Install Dependencies
```bash
pip install transformers torch
```

### 2️⃣ Classify Sentences
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model_name = "debjit20504/mRNA-biobert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Move the model to MPS (Apple Silicon), CUDA, or CPU, whichever is available
device = torch.device(
    "mps" if torch.backends.mps.is_available()
    else "cuda" if torch.cuda.is_available()
    else "cpu"
)
model.to(device)
model.eval()

def classify_text(text):
    # Truncate long passages to BERT's 512-token limit
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).to(device)
    with torch.no_grad():
        output = model(**inputs)
    label = torch.argmax(output.logits, dim=1).item()
    return "Relevant (mRNA-related)" if label == 1 else "Not Relevant"

# Example
sample_text = "mRNA translation is regulated by miRNAs."
print(f"Classification: {classify_text(sample_text)}")
```
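
The helper above classifies one sentence at a time. For filtering many sentences extracted from papers, a batched variant like the minimal sketch below can help; it is not part of the original example and reuses the `tokenizer`, `model`, and `device` objects loaded above:

```python
def classify_batch(sentences, batch_size=32):
    """Classify a list of sentences; returns one label string per sentence."""
    results = []
    for i in range(0, len(sentences), batch_size):
        batch = sentences[i:i + batch_size]
        inputs = tokenizer(batch, return_tensors="pt", padding=True,
                           truncation=True, max_length=512).to(device)
        with torch.no_grad():
            logits = model(**inputs).logits
        for label in logits.argmax(dim=1).tolist():
            results.append("Relevant (mRNA-related)" if label == 1 else "Not Relevant")
    return results

print(classify_batch([
    "mRNA translation is regulated by miRNAs.",
    "The meeting was rescheduled to Friday.",
]))
```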

## 📊 Training Details
- Dataset: Biomedical text corpus with 429,785 relevant sentences and 87,966 irrelevant sentences.
- Fine-Tuning Method: LoRA (Low-Rank Adaptation) for efficient training (a sketch of this setup follows below).
- Training Hardware: NVIDIA RTX A6000 GPU.
- Training Settings:
  - Batch size: 32
  - Learning rate: 2e-5
  - Optimizer: AdamW
  - Warmup steps: 1000
  - Epochs: 5
  - Mixed precision (fp16): ✅ Enabled for efficiency.
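
The training script itself is not included with this model card. As a rough sketch of the setup described above (LoRA adapters on `dmis-lab/biobert-base-cased-v1.1` with the listed hyperparameters), a `peft` + `transformers` recipe might look like the following. The LoRA rank/alpha/dropout values and the toy dataset are assumptions for illustration, not values taken from the actual training run:

```python
from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Wrap the encoder with LoRA adapters (rank/alpha/dropout are assumed values)
model = get_peft_model(model, LoraConfig(task_type=TaskType.SEQ_CLS,
                                         r=8, lora_alpha=16, lora_dropout=0.1))

# Placeholder data: in practice this would be the labeled mRNA sentence corpus
toy = Dataset.from_dict({
    "text": ["mRNA translation is regulated by miRNAs.",
             "The weather was sunny during the conference."],
    "labels": [1, 0],
})
toy = toy.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                   padding="max_length", max_length=128),
              batched=True)

args = TrainingArguments(
    output_dir="mrna-biobert-lora",
    per_device_train_batch_size=32,  # batch size from the settings above
    learning_rate=2e-5,
    warmup_steps=1000,
    num_train_epochs=5,
    fp16=True,                       # mixed precision; requires a GPU
)

# Trainer uses AdamW by default, matching the optimizer listed above
Trainer(model=model, args=args, train_dataset=toy).train()
```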

## 📬 Contact
For any questions or collaborations, reach out via:

**📧 Email**: [email protected]
**🔗 LinkedIn**: https://www.linkedin.com/in/debjit-pramanik-88a837171/