# SCION Questions Embedding Model

This model is fine-tuned from `sentence-transformers/all-MiniLM-L6-v2` on a dataset of questions about the SCION Internet architecture, each paired with a relevant document passage.

## Model description

The model was fine-tuned to improve passage-retrieval quality in retrieval-augmented generation (RAG) applications that answer questions about the SCION Internet architecture.

## Intended uses & limitations

This model is trained specifically for retrieving relevant passages from a corpus of documentation, specifications, and research papers related to the SCION Internet architecture. It is unlikely to outperform the base model on unrelated domains.

## Training procedure

The model was trained with the `sentence-transformers` library using `MultipleNegativesRankingLoss` on (query, document) pairs, where the other documents in each batch serve as in-batch negatives.

## Performance

| Metric   | Base model | Fine-tuned | Improvement |
|----------|-----------:|-----------:|------------:|
| nDCG@10  | 0.6009     | 0.7928     | +31.92%     |
| MRR      | 0.5476     | 0.7475     | +36.52%     |
| Hits@1   | 0.4395     | 0.6457     | +46.94%     |
| Hits@3   | 0.6211     | 0.8327     | +34.08%     |
| Hits@10  | 0.7686     | 0.9323     | +21.30%     |
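For reference, these metrics can be computed from per-query ranks. The sketch below assumes the common single-relevant-passage setting (one gold passage per query), in which case the ideal DCG is 1 and nDCG@k reduces to a reciprocal-log discount; the example ranks are invented for illustration.

```python
import math

def retrieval_metrics(ranks, k=10):
    """Compute MRR, hits@k, and nDCG@k from the 1-based rank of each
    query's single relevant passage (None if it was not retrieved)."""
    n = len(ranks)
    # MRR: mean reciprocal rank over all queries (0 contribution if missing).
    mrr = sum(1.0 / r for r in ranks if r is not None) / n
    # hits@k: fraction of queries whose relevant passage appears in the top k.
    hits = sum(1 for r in ranks if r is not None and r <= k) / n
    # With one relevant document, ideal DCG = 1, so nDCG = 1 / log2(rank + 1).
    ndcg = sum(1.0 / math.log2(r + 1) for r in ranks if r is not None and r <= k) / n
    return mrr, hits, ndcg

# Example: relevant passage ranked 1st, 3rd, and outside the top 10.
mrr, hits10, ndcg10 = retrieval_metrics([1, 3, None])
```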
## Model details

- Model size: 22.7M parameters
- Tensor type: F32
- Format: Safetensors