Update README.md
README.md
CHANGED
@@ -10,8 +10,6 @@ base_model:
pipeline_tag: feature-extraction
---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->

# Genomic_context_bert

@@ -20,11 +18,13 @@ This model is a pre-trained version of [BERT model](https://huggingface.co/googl

## Model description

-
+The model is based on the BERT-base architecture and was pre-trained with the following configuration: 12 hidden layers, a hidden size of 512, and 8 attention heads. Pre-training used self-supervised masked language modeling (MLM) as the objective, with a 20% token masking probability. The model was trained on approximately 30,000 bacterial genomes using 8 Tesla V100-SXM2-32GB GPUs over a 24-hour period. This configuration enables the model to learn contextual embeddings that capture information from the genomic neighborhood of genes, providing a foundation for downstream analyses.
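For orientation, the sketch below shows how a configuration along these lines could be expressed with the Hugging Face `transformers` `BertConfig`; the vocabulary size, intermediate size, and maximum sequence length are not stated in this card, so the values used for them here are placeholders.

```python
from transformers import BertConfig, BertForMaskedLM, DataCollatorForLanguageModeling

# Architecture stated above: 12 hidden layers, hidden size 512, 8 attention heads.
# vocab_size, intermediate_size and max_position_embeddings are NOT given in this
# card; the values below are placeholders and must match the actual tokenizer/corpus.
config = BertConfig(
    num_hidden_layers=12,
    hidden_size=512,
    num_attention_heads=8,
    intermediate_size=2048,       # assumed (4 x hidden size)
    vocab_size=30522,             # placeholder
    max_position_embeddings=512,  # placeholder
)
model = BertForMaskedLM(config)

# Pre-training objective described above: masked language modeling with a 20%
# masking probability (a tokenizer for the genomic-context vocabulary is required).
# data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.2)
```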

## Intended uses & limitations

-
+The model we trained is a BERT-base architecture pre-trained from scratch on approximately 30,000 bacterial genomes. Its primary intended use is to generate contextual embeddings of bacterial proteins based on the genomic neighborhood of the gene encoding the protein. These embeddings capture contextual information from the surrounding genomic sequences, which may reflect functional or biological signals; a minimal sketch of how such embeddings could be extracted is given below.
+
+The main limitation of this model is that it has been pre-trained exclusively on bacterial genomes and lacks fine-tuning with a specific classification head. Consequently, it cannot directly perform tasks such as functional prediction or classification out of the box. Instead, it serves as a tool for generating contextual representations, which can be further analyzed or used in downstream applications, where these embeddings may provide functional insights when paired with additional training or analysis.
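As a minimal usage sketch, assuming the model is loaded through the standard `transformers` auto classes, embeddings can be obtained roughly as follows; the repository id and the example input are placeholders, since the exact input format expected by the tokenizer is not described in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# "<this-repo-id>" and the example input are placeholders; the tokenizer expects
# whatever genomic-neighborhood token sequence the model was pre-trained on.
model_id = "<this-repo-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

context = "genomic neighborhood tokens of the gene encoding the protein"
inputs = tokenizer(context, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into a single contextual embedding
# (dimension 512, matching the hidden size above).
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)  # torch.Size([512])
```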

## Training and evaluation data
