## Bedrock Titan Text Embeddings v2
This repository contains the MTEB scores and usage examples for Bedrock Titan Text Embeddings v2. You can use the embedding model either via the Bedrock InvokeModel API or via Bedrock's batch jobs. For RAG use cases, we recommend the former for embedding queries at search time (latency optimized) and the latter for indexing the corpus (throughput optimized).
## Using Bedrock's InvokeModel API
```python
import json

import boto3


class TitanEmbeddings(object):
    accept = "application/json"
    content_type = "application/json"

    def __init__(self, model_id="amazon.titan-embed-text-v2:0"):
        self.bedrock = boto3.client(service_name='bedrock-runtime')
        self.model_id = model_id

    def __call__(self, text, dimensions, normalize=True):
        """
        Returns Titan Embeddings

        Args:
            text (str): text to embed
            dimensions (int): Number of output dimensions.
            normalize (bool): Whether to return the normalized embedding or not.

        Return:
            List[float]: Embedding
        """
        body = json.dumps({
            "inputText": text,
            "dimensions": dimensions,
            "normalize": normalize
        })
        response = self.bedrock.invoke_model(
            body=body, modelId=self.model_id, accept=self.accept, contentType=self.content_type
        )
        response_body = json.loads(response.get('body').read())
        return response_body['embedding']


if __name__ == '__main__':
    """
    Entrypoint for Amazon Titan Embeddings V2 - Text example.
    """
    dimensions = 1024
    normalize = True

    titan_embeddings_v2 = TitanEmbeddings(model_id="amazon.titan-embed-text-v2:0")

    input_text = "What are the different services that you offer?"
    embedding = titan_embeddings_v2(input_text, dimensions, normalize)

    print(f"{input_text=}")
    print(f"{embedding[:10]=}")
```
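Once queries and documents are embedded, search reduces to a similarity comparison. The sketch below is not part of the official example; it reuses the `titan_embeddings_v2` helper above and assumes `normalize=True`, so the dot product of two embeddings equals their cosine similarity (the `corpus_texts` and scoring code are illustrative only):

```python
import numpy as np

# Illustrative corpus; in a real RAG setup these embeddings would typically
# come from a batch job and live in a vector store.
corpus_texts = [
    "We offer managed batch inference for large indexing workloads.",
    "Our support plans include 24/7 access to cloud engineers.",
]
corpus_embeddings = np.array([titan_embeddings_v2(t, dimensions, normalize) for t in corpus_texts])

query = "What are the different services that you offer?"
query_embedding = np.array(titan_embeddings_v2(query, dimensions, normalize))

# With normalized embeddings, the dot product is the cosine similarity.
scores = corpus_embeddings @ query_embedding
best = int(np.argmax(scores))
print(f"best match: {corpus_texts[best]!r} (score={scores[best]:.3f})")
```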
## Using Bedrock's batch jobs
```python
import requests
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

region = "us-east-1"
base_uri = f"bedrock.{region}.amazonaws.com"
batch_job_uri = f"https://{base_uri}/model-invocation-job/"

# For details on how to set up an IAM role for batch inference, see
# https://docs.aws.amazon.com/bedrock/latest/userguide/batch-inference-permissions.html
role_arn = "arn:aws:iam::111122223333:role/my-batch-inference-role"

payload = {
    "inputDataConfig": {
        "s3InputDataConfig": {
            "s3Uri": "s3://my-input-bucket/batch-input/",
            "s3InputFormat": "JSONL"
        }
    },
    "jobName": "embeddings-v2-batch-job",
    "modelId": "amazon.titan-embed-text-v2:0",
    "outputDataConfig": {
        "s3OutputDataConfig": {
            "s3Uri": "s3://my-output-bucket/batch-output/"
        }
    },
    "roleArn": role_arn
}

request_auth = BotoAWSRequestsAuth(
    aws_host=base_uri,
    aws_region=region,
    aws_service="bedrock"
)

response = requests.request("POST", batch_job_uri, json=payload, auth=request_auth)
print(response.json())
```
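The batch job reads its input as JSONL records from the `s3Uri` prefix. As a hedged sketch (file name, texts, and bucket are placeholders; consult the Bedrock batch inference documentation for the authoritative record schema), each line pairs a `recordId` with a `modelInput` in the same format as the InvokeModel request body:

```python
import json

# Placeholder documents; each JSONL line carries a recordId plus the model's
# native request body under modelInput.
documents = [
    "First document to index.",
    "Second document to index.",
]

with open("batch-input.jsonl", "w") as f:
    for i, text in enumerate(documents):
        record = {
            "recordId": f"doc-{i:05d}",
            "modelInput": {"inputText": text, "dimensions": 1024, "normalize": True},
        }
        f.write(json.dumps(record) + "\n")

# Upload the file to s3://my-input-bucket/batch-input/ before creating the job.
```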
"value": 60.40570575783401}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hu)", "type": "mteb/amazon_massive_scenario", "config": "hu", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 53.52051109616678}, {"type": "f1", "value": 51.210696278552014}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (hy)", "type": "mteb/amazon_massive_scenario", "config": "hy", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 45.94821788836584}, {"type": "f1", "value": 43.65062337089374}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (id)", "type": "mteb/amazon_massive_scenario", "config": "id", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 60.33288500336248}, {"type": "f1", "value": 59.50436947982156}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (is)", "type": "mteb/amazon_massive_scenario", "config": "is", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 50.09751176866174}, {"type": "f1", "value": 47.293838685239}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (it)", "type": "mteb/amazon_massive_scenario", "config": "it", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 66.49293880295897}, {"type": "f1", "value": 65.96586462307134}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ja)", "type": "mteb/amazon_massive_scenario", "config": "ja", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 68.35911230665769}, {"type": "f1", "value": 67.77840431764355}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (jv)", "type": "mteb/amazon_massive_scenario", "config": "jv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 50.585070611970416}, {"type": "f1", "value": 47.957277125670295}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ka)", "type": "mteb/amazon_massive_scenario", "config": "ka", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 42.76059179556153}, {"type": "f1", "value": 40.446327361325565}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (km)", "type": "mteb/amazon_massive_scenario", "config": "km", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 40.648957632817755}, {"type": "f1", "value": 37.231284508608276}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (kn)", "type": "mteb/amazon_massive_scenario", "config": "kn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 57.24613315400134}, {"type": "f1", "value": 55.14523425690653}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ko)", "type": "mteb/amazon_massive_scenario", 
"config": "ko", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 63.839946200403496}, {"type": "f1", "value": 62.6239063060589}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (lv)", "type": "mteb/amazon_massive_scenario", "config": "lv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 53.14391392064559}, {"type": "f1", "value": 50.08744471966442}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ml)", "type": "mteb/amazon_massive_scenario", "config": "ml", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.8399462004035}, {"type": "f1", "value": 57.586991117740794}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (mn)", "type": "mteb/amazon_massive_scenario", "config": "mn", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 44.81842636180229}, {"type": "f1", "value": 42.82813975084655}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ms)", "type": "mteb/amazon_massive_scenario", "config": "ms", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.90047074646939}, {"type": "f1", "value": 56.640503134745714}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (my)", "type": "mteb/amazon_massive_scenario", "config": "my", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 38.52051109616678}, {"type": "f1", "value": 36.504553927569454}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nb)", "type": "mteb/amazon_massive_scenario", "config": "nb", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.63685272360458}, {"type": "f1", "value": 62.88129994502907}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (nl)", "type": "mteb/amazon_massive_scenario", "config": "nl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 67.54203093476798}, {"type": "f1", "value": 66.02745142287087}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pl)", "type": "mteb/amazon_massive_scenario", "config": "pl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.00470746469402}, {"type": "f1", "value": 62.91845058355313}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (pt)", "type": "mteb/amazon_massive_scenario", "config": "pt", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 65.69939475453934}, {"type": "f1", "value": 65.37413822081011}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ro)", "type": "mteb/amazon_massive_scenario", "config": "ro", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 57.19905850706121}, {"type": "f1", 
"value": 55.08271383695852}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ru)", "type": "mteb/amazon_massive_scenario", "config": "ru", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 65.42367182246134}, {"type": "f1", "value": 64.61962307022019}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sl)", "type": "mteb/amazon_massive_scenario", "config": "sl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 55.147948890383326}, {"type": "f1", "value": 53.2933851469903}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sq)", "type": "mteb/amazon_massive_scenario", "config": "sq", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 55.679219905850715}, {"type": "f1", "value": 52.80159603468007}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sv)", "type": "mteb/amazon_massive_scenario", "config": "sv", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 69.42165433759246}, {"type": "f1", "value": 67.99984081248608}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (sw)", "type": "mteb/amazon_massive_scenario", "config": "sw", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 52.30329522528581}, {"type": "f1", "value": 50.10810382364662}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ta)", "type": "mteb/amazon_massive_scenario", "config": "ta", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 56.186953597848024}, {"type": "f1", "value": 55.51656586643505}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (te)", "type": "mteb/amazon_massive_scenario", "config": "te", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 58.019502353732356}, {"type": "f1", "value": 56.260726586358736}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (th)", "type": "mteb/amazon_massive_scenario", "config": "th", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 52.55548083389374}, {"type": "f1", "value": 51.139712264362714}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tl)", "type": "mteb/amazon_massive_scenario", "config": "tl", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 57.43443174176194}, {"type": "f1", "value": 55.76244076715635}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (tr)", "type": "mteb/amazon_massive_scenario", "config": "tr", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 61.55346334902488}, {"type": "f1", "value": 61.25819823057803}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (ur)", "type": 
"mteb/amazon_massive_scenario", "config": "ur", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 47.114996637525216}, {"type": "f1", "value": 45.20428169546973}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (vi)", "type": "mteb/amazon_massive_scenario", "config": "vi", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 56.83254875588434}, {"type": "f1", "value": 56.00919757601416}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-CN)", "type": "mteb/amazon_massive_scenario", "config": "zh-CN", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 69.57969065232012}, {"type": "f1", "value": 69.17378512156806}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (zh-TW)", "type": "mteb/amazon_massive_scenario", "config": "zh-TW", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 64.02488231338263}, {"type": "f1", "value": 64.09790488949963}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 29.71446786877363}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 28.003624498407547}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.29671894458151}, {"type": "mrr", "value": 32.44455140124599}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.127}, {"type": "map_at_10", "value": 13.047}, {"type": "map_at_100", "value": 15.754000000000001}, {"type": "map_at_1000", "value": 16.930999999999997}, {"type": "map_at_3", "value": 9.876999999999999}, {"type": "map_at_5", "value": 11.265}, {"type": "mrr_at_1", "value": 45.511}, {"type": "mrr_at_10", "value": 54.75600000000001}, {"type": "mrr_at_100", "value": 55.33}, {"type": "mrr_at_1000", "value": 55.374}, {"type": "mrr_at_3", "value": 53.147999999999996}, {"type": "mrr_at_5", "value": 53.952999999999996}, {"type": "ndcg_at_1", "value": 43.653}, {"type": "ndcg_at_10", "value": 33.936}, {"type": "ndcg_at_100", "value": 29.952}, {"type": "ndcg_at_1000", "value": 38.356}, {"type": "ndcg_at_3", "value": 40.018}, {"type": "ndcg_at_5", "value": 37.102000000000004}, {"type": "precision_at_1", "value": 45.511}, {"type": "precision_at_10", "value": 24.768}, {"type": "precision_at_100", "value": 7.13}, {"type": "precision_at_1000", "value": 1.928}, {"type": "precision_at_3", "value": 37.461}, {"type": "precision_at_5", "value": 31.703}, {"type": "recall_at_1", "value": 6.127}, {"type": "recall_at_10", "value": 16.512999999999998}, {"type": "recall_at_100", "value": 29.057}, {"type": "recall_at_1000", "value": 
59.25899999999999}, {"type": "recall_at_3", "value": 10.940999999999999}, {"type": "recall_at_5", "value": 12.925}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 32.228}, {"type": "map_at_10", "value": 47.56}, {"type": "map_at_100", "value": 48.539}, {"type": "map_at_1000", "value": 48.567}, {"type": "map_at_3", "value": 43.214999999999996}, {"type": "map_at_5", "value": 45.799}, {"type": "mrr_at_1", "value": 36.53}, {"type": "mrr_at_10", "value": 50.004000000000005}, {"type": "mrr_at_100", "value": 50.737}, {"type": "mrr_at_1000", "value": 50.758}, {"type": "mrr_at_3", "value": 46.543}, {"type": "mrr_at_5", "value": 48.672}, {"type": "ndcg_at_1", "value": 36.501}, {"type": "ndcg_at_10", "value": 55.103}, {"type": "ndcg_at_100", "value": 59.156}, {"type": "ndcg_at_1000", "value": 59.821999999999996}, {"type": "ndcg_at_3", "value": 47.089}, {"type": "ndcg_at_5", "value": 51.35999999999999}, {"type": "precision_at_1", "value": 36.501}, {"type": "precision_at_10", "value": 9.046999999999999}, {"type": "precision_at_100", "value": 1.13}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 21.398}, {"type": "precision_at_5", "value": 15.307}, {"type": "recall_at_1", "value": 32.228}, {"type": "recall_at_10", "value": 75.608}, {"type": "recall_at_100", "value": 93.062}, {"type": "recall_at_1000", "value": 98.059}, {"type": "recall_at_3", "value": 55.021}, {"type": "recall_at_5", "value": 64.873}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.623}, {"type": "map_at_10", "value": 84.705}, {"type": "map_at_100", "value": 85.333}, {"type": "map_at_1000", "value": 85.348}, {"type": "map_at_3", "value": 81.736}, {"type": "map_at_5", "value": 83.616}, {"type": "mrr_at_1", "value": 81.28}, {"type": "mrr_at_10", "value": 87.518}, {"type": "mrr_at_100", "value": 87.619}, {"type": "mrr_at_1000", "value": 87.62}, {"type": "mrr_at_3", "value": 86.545}, {"type": "mrr_at_5", "value": 87.238}, {"type": "ndcg_at_1", "value": 81.28999999999999}, {"type": "ndcg_at_10", "value": 88.412}, {"type": "ndcg_at_100", "value": 89.603}, {"type": "ndcg_at_1000", "value": 89.696}, {"type": "ndcg_at_3", "value": 85.563}, {"type": "ndcg_at_5", "value": 87.17}, {"type": "precision_at_1", "value": 81.28999999999999}, {"type": "precision_at_10", "value": 13.439}, {"type": "precision_at_100", "value": 1.5310000000000001}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.437}, {"type": "precision_at_5", "value": 24.662}, {"type": "recall_at_1", "value": 70.623}, {"type": "recall_at_10", "value": 95.531}, {"type": "recall_at_100", "value": 99.58}, {"type": "recall_at_1000", "value": 99.978}, {"type": "recall_at_3", "value": 87.368}, {"type": "recall_at_5", "value": 91.898}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 49.53241309124786}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": 
[{"type": "v_measure", "value": 59.712004482915994}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.313}, {"type": "map_at_10", "value": 13.447000000000001}, {"type": "map_at_100", "value": 15.491}, {"type": "map_at_1000", "value": 15.784999999999998}, {"type": "map_at_3", "value": 9.58}, {"type": "map_at_5", "value": 11.562}, {"type": "mrr_at_1", "value": 26.200000000000003}, {"type": "mrr_at_10", "value": 37.212}, {"type": "mrr_at_100", "value": 38.190000000000005}, {"type": "mrr_at_1000", "value": 38.242}, {"type": "mrr_at_3", "value": 34.067}, {"type": "mrr_at_5", "value": 35.862}, {"type": "ndcg_at_1", "value": 26.200000000000003}, {"type": "ndcg_at_10", "value": 21.979000000000003}, {"type": "ndcg_at_100", "value": 29.726999999999997}, {"type": "ndcg_at_1000", "value": 34.766000000000005}, {"type": "ndcg_at_3", "value": 21.16}, {"type": "ndcg_at_5", "value": 18.478}, {"type": "precision_at_1", "value": 26.200000000000003}, {"type": "precision_at_10", "value": 11.25}, {"type": "precision_at_100", "value": 2.241}, {"type": "precision_at_1000", "value": 0.345}, {"type": "precision_at_3", "value": 19.633}, {"type": "precision_at_5", "value": 16.14}, {"type": "recall_at_1", "value": 5.313}, {"type": "recall_at_10", "value": 22.808}, {"type": "recall_at_100", "value": 45.540000000000006}, {"type": "recall_at_1000", "value": 70.043}, {"type": "recall_at_3", "value": 11.932}, {"type": "recall_at_5", "value": 16.347}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.95540796619258}, {"type": "cos_sim_spearman", "value": 76.49462277620303}, {"type": "euclidean_pearson", "value": 71.67643435507317}, {"type": "euclidean_spearman", "value": 76.4915921108082}, {"type": "manhattan_pearson", "value": 71.71412560074847}, {"type": "manhattan_spearman", "value": 76.46738312094736}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.48773267615617}, {"type": "cos_sim_spearman", "value": 74.99867664033701}, {"type": "euclidean_pearson", "value": 76.0885798115032}, {"type": "euclidean_spearman", "value": 74.99438208715942}, {"type": "manhattan_pearson", "value": 76.09382557464033}, {"type": "manhattan_spearman", "value": 74.96139353538533}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.19022560804167}, {"type": "cos_sim_spearman", "value": 87.9128142106699}, {"type": "euclidean_pearson", "value": 85.51390183763914}, {"type": "euclidean_spearman", "value": 87.89995488057309}, {"type": "manhattan_pearson", "value": 85.44945034816052}, {"type": "manhattan_spearman", "value": 87.791458898378}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.17877898640924}, {"type": "cos_sim_spearman", "value": 82.25544088807465}, {"type": 
"euclidean_pearson", "value": 82.36395988835416}, {"type": "euclidean_spearman", "value": 82.26359924974219}, {"type": "manhattan_pearson", "value": 82.39219808999891}, {"type": "manhattan_spearman", "value": 82.27757404868157}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.66865350602554}, {"type": "cos_sim_spearman", "value": 87.87150169810872}, {"type": "euclidean_pearson", "value": 85.41520650056647}, {"type": "euclidean_spearman", "value": 87.86636613654022}, {"type": "manhattan_pearson", "value": 85.38710485867502}, {"type": "manhattan_spearman", "value": 87.83513424575301}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.75527643407175}, {"type": "cos_sim_spearman", "value": 80.9239008594745}, {"type": "euclidean_pearson", "value": 79.37682746800515}, {"type": "euclidean_spearman", "value": 80.91978947194092}, {"type": "manhattan_pearson", "value": 79.38884189990698}, {"type": "manhattan_spearman", "value": 80.91771608341014}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ko-ko)", "type": "mteb/sts17-crosslingual-sts", "config": "ko-ko", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 80.24344311909609}, {"type": "cos_sim_spearman", "value": 80.78933956176022}, {"type": "euclidean_pearson", "value": 76.95229806538676}, {"type": "euclidean_spearman", "value": 80.79706724032172}, {"type": "manhattan_pearson", "value": 76.90212135774246}, {"type": "manhattan_spearman", "value": 80.68727415384441}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (ar-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "ar-ar", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 77.33891809228084}, {"type": "cos_sim_spearman", "value": 79.37912430317627}, {"type": "euclidean_pearson", "value": 72.56919843951036}, {"type": "euclidean_spearman", "value": 79.3091436905072}, {"type": "manhattan_pearson", "value": 72.4282811588754}, {"type": "manhattan_spearman", "value": 78.90144894538078}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-ar)", "type": "mteb/sts17-crosslingual-sts", "config": "en-ar", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.68908656739356}, {"type": "cos_sim_spearman", "value": 58.76110210983758}, {"type": "euclidean_pearson", "value": 59.14749159577439}, {"type": "euclidean_spearman", "value": 59.015997032145016}, {"type": "manhattan_pearson", "value": 57.907675340322676}, {"type": "manhattan_spearman", "value": 57.07751173022352}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-de)", "type": "mteb/sts17-crosslingual-sts", "config": "en-de", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 75.53325164873934}, {"type": "cos_sim_spearman", "value": 76.13104388846271}, {"type": "euclidean_pearson", "value": 74.61931031522006}, {"type": "euclidean_spearman", "value": 75.96875166459931}, {"type": "manhattan_pearson", "value": 74.82154350849251}, {"type": 
"manhattan_spearman", "value": 76.64455924104236}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.4228376590724}, {"type": "cos_sim_spearman", "value": 87.22764976624408}, {"type": "euclidean_pearson", "value": 81.94975688107507}, {"type": "euclidean_spearman", "value": 87.19193932664932}, {"type": "manhattan_pearson", "value": 82.0043964628936}, {"type": "manhattan_spearman", "value": 87.09130430957818}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-tr)", "type": "mteb/sts17-crosslingual-sts", "config": "en-tr", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 57.5627552601949}, {"type": "cos_sim_spearman", "value": 55.5263144563657}, {"type": "euclidean_pearson", "value": 57.00569241610482}, {"type": "euclidean_spearman", "value": 55.35291811479459}, {"type": "manhattan_pearson", "value": 56.99656284623506}, {"type": "manhattan_spearman", "value": 55.593673744709946}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-en)", "type": "mteb/sts17-crosslingual-sts", "config": "es-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.93801311909735}, {"type": "cos_sim_spearman", "value": 72.2581115470475}, {"type": "euclidean_pearson", "value": 68.24881290268563}, {"type": "euclidean_spearman", "value": 72.60813652864522}, {"type": "manhattan_pearson", "value": 67.86369874088834}, {"type": "manhattan_spearman", "value": 71.92346382988023}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (es-es)", "type": "mteb/sts17-crosslingual-sts", "config": "es-es", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.20555264114785}, {"type": "cos_sim_spearman", "value": 85.0588060013836}, {"type": "euclidean_pearson", "value": 81.78229090166155}, {"type": "euclidean_spearman", "value": 85.09687374900614}, {"type": "manhattan_pearson", "value": 81.77449099980244}, {"type": "manhattan_spearman", "value": 84.70331476222177}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (fr-en)", "type": "mteb/sts17-crosslingual-sts", "config": "fr-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 73.786793911605}, {"type": "cos_sim_spearman", "value": 75.63094397551554}, {"type": "euclidean_pearson", "value": 71.64292842519251}, {"type": "euclidean_spearman", "value": 75.60215267384011}, {"type": "manhattan_pearson", "value": 72.2124078037642}, {"type": "manhattan_spearman", "value": 76.34546028465175}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (it-en)", "type": "mteb/sts17-crosslingual-sts", "config": "it-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 69.62139987106455}, {"type": "cos_sim_spearman", "value": 71.35872226722493}, {"type": "euclidean_pearson", "value": 68.50103697766141}, {"type": "euclidean_spearman", "value": 71.24590187948473}, {"type": "manhattan_pearson", "value": 68.89236562525663}, {"type": "manhattan_spearman", "value": 71.77994400789173}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (nl-en)", "type": "mteb/sts17-crosslingual-sts", 
"config": "nl-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.62728174871292}, {"type": "cos_sim_spearman", "value": 71.98655715409397}, {"type": "euclidean_pearson", "value": 70.27026741609356}, {"type": "euclidean_spearman", "value": 72.14004669693777}, {"type": "manhattan_pearson", "value": 70.46335140108751}, {"type": "manhattan_spearman", "value": 72.6638254374311}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 71.10248717637424}, {"type": "cos_sim_spearman", "value": 68.5905931564714}, {"type": "euclidean_pearson", "value": 71.23290000423759}, {"type": "euclidean_spearman", "value": 68.6419513130457}, {"type": "manhattan_pearson", "value": 71.6886015250234}, {"type": "manhattan_spearman", "value": 69.47543660368697}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de)", "type": "mteb/sts22-crosslingual-sts", "config": "de", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 59.010555056244776}, {"type": "cos_sim_spearman", "value": 60.121771179899255}, {"type": "euclidean_pearson", "value": 53.04527785573465}, {"type": "euclidean_spearman", "value": 60.121771179899255}, {"type": "manhattan_pearson", "value": 52.931480071124234}, {"type": "manhattan_spearman", "value": 60.03868409331775}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es)", "type": "mteb/sts22-crosslingual-sts", "config": "es", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 70.6833028374664}, {"type": "cos_sim_spearman", "value": 68.57396263856863}, {"type": "euclidean_pearson", "value": 68.30905084522986}, {"type": "euclidean_spearman", "value": 68.57396263856863}, {"type": "manhattan_pearson", "value": 70.91400657516918}, {"type": "manhattan_spearman", "value": 72.72240857808112}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl)", "type": "mteb/sts22-crosslingual-sts", "config": "pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 36.948290734279645}, {"type": "cos_sim_spearman", "value": 42.07722031011005}, {"type": "euclidean_pearson", "value": 22.539446972018467}, {"type": "euclidean_spearman", "value": 42.07722031011005}, {"type": "manhattan_pearson", "value": 24.119402246951786}, {"type": "manhattan_spearman", "value": 45.80525501822569}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (tr)", "type": "mteb/sts22-crosslingual-sts", "config": "tr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 66.97840719036533}, {"type": "cos_sim_spearman", "value": 66.62430648804775}, {"type": "euclidean_pearson", "value": 66.89526587772023}, {"type": "euclidean_spearman", "value": 66.62430648804775}, {"type": "manhattan_pearson", "value": 68.6929895225091}, {"type": "manhattan_spearman", "value": 68.91772708432867}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ar)", "type": "mteb/sts22-crosslingual-sts", "config": "ar", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 56.65098289103698}, {"type": 
"cos_sim_spearman", "value": 57.436674670689214}, {"type": "euclidean_pearson", "value": 51.79149892785239}, {"type": "euclidean_spearman", "value": 57.436674670689214}, {"type": "manhattan_pearson", "value": 52.64807953938707}, {"type": "manhattan_spearman", "value": 58.94583987372767}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (ru)", "type": "mteb/sts22-crosslingual-sts", "config": "ru", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 60.669531297510225}, {"type": "cos_sim_spearman", "value": 61.71342510003327}, {"type": "euclidean_pearson", "value": 55.821871433553504}, {"type": "euclidean_spearman", "value": 61.71342510003327}, {"type": "manhattan_pearson", "value": 57.77073441351117}, {"type": "manhattan_spearman", "value": 65.20759033207}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh)", "type": "mteb/sts22-crosslingual-sts", "config": "zh", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 64.34728960310699}, {"type": "cos_sim_spearman", "value": 64.03565302589584}, {"type": "euclidean_pearson", "value": 61.958942333930544}, {"type": "euclidean_spearman", "value": 64.03565302589584}, {"type": "manhattan_pearson", "value": 64.65072672727923}, {"type": "manhattan_spearman", "value": 67.82569969943107}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr)", "type": "mteb/sts22-crosslingual-sts", "config": "fr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.47120815594353}, {"type": "cos_sim_spearman", "value": 81.46916544955101}, {"type": "euclidean_pearson", "value": 79.21753533489019}, {"type": "euclidean_spearman", "value": 81.46916544955101}, {"type": "manhattan_pearson", "value": 78.26605518839271}, {"type": "manhattan_spearman", "value": 81.29749169339514}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-en)", "type": "mteb/sts22-crosslingual-sts", "config": "de-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 63.31467231933632}, {"type": "cos_sim_spearman", "value": 53.36160506603274}, {"type": "euclidean_pearson", "value": 64.98434169416196}, {"type": "euclidean_spearman", "value": 53.36160506603274}, {"type": "manhattan_pearson", "value": 69.6837006629638}, {"type": "manhattan_spearman", "value": 60.85384324700893}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-en)", "type": "mteb/sts22-crosslingual-sts", "config": "es-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.99425127770438}, {"type": "cos_sim_spearman", "value": 77.41308957007035}, {"type": "euclidean_pearson", "value": 79.69441265626801}, {"type": "euclidean_spearman", "value": 77.41308957007035}, {"type": "manhattan_pearson", "value": 80.3726291667624}, {"type": "manhattan_spearman", "value": 79.0414050644631}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (it)", "type": "mteb/sts22-crosslingual-sts", "config": "it", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 79.13469287716659}, {"type": "cos_sim_spearman", "value": 79.27976881582065}, {"type": "euclidean_pearson", "value": 77.65964425780172}, {"type": "euclidean_spearman", "value": 79.27976881582065}, {"type": 
"manhattan_pearson", "value": 77.64158710257945}, {"type": "manhattan_spearman", "value": 79.22242281895944}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (pl-en)", "type": "mteb/sts22-crosslingual-sts", "config": "pl-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 76.303314995599}, {"type": "cos_sim_spearman", "value": 77.4991345414335}, {"type": "euclidean_pearson", "value": 74.88826621426401}, {"type": "euclidean_spearman", "value": 77.4991345414335}, {"type": "manhattan_pearson", "value": 77.70223488989319}, {"type": "manhattan_spearman", "value": 79.69746987627822}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (zh-en)", "type": "mteb/sts22-crosslingual-sts", "config": "zh-en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 70.87814957197239}, {"type": "cos_sim_spearman", "value": 69.86785751801642}, {"type": "euclidean_pearson", "value": 68.68630146548654}, {"type": "euclidean_spearman", "value": 69.8615799070054}, {"type": "manhattan_pearson", "value": 61.83743315022061}, {"type": "manhattan_spearman", "value": 64.35346450347738}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (es-it)", "type": "mteb/sts22-crosslingual-sts", "config": "es-it", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 74.1484689923211}, {"type": "cos_sim_spearman", "value": 74.69046355179742}, {"type": "euclidean_pearson", "value": 73.03951899271793}, {"type": "euclidean_spearman", "value": 74.69820632954205}, {"type": "manhattan_pearson", "value": 73.36810146930709}, {"type": "manhattan_spearman", "value": 75.33154135287258}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-fr)", "type": "mteb/sts22-crosslingual-sts", "config": "de-fr", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 51.43125921362742}, {"type": "cos_sim_spearman", "value": 58.25341239774093}, {"type": "euclidean_pearson", "value": 48.00689582162098}, {"type": "euclidean_spearman", "value": 58.533194841668426}, {"type": "manhattan_pearson", "value": 46.11721778230745}, {"type": "manhattan_spearman", "value": 55.026889052448134}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (de-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "de-pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 40.066205533538046}, {"type": "cos_sim_spearman", "value": 48.46991890841381}, {"type": "euclidean_pearson", "value": 42.29606506858651}, {"type": "euclidean_spearman", "value": 48.34674249441531}, {"type": "manhattan_pearson", "value": 41.70680990555484}, {"type": "manhattan_spearman", "value": 47.54609580342499}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (fr-pl)", "type": "mteb/sts22-crosslingual-sts", "config": "fr-pl", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.26527545520592}, {"type": "cos_sim_spearman", "value": 73.24670207647144}, {"type": "euclidean_pearson", "value": 81.78699781584893}, {"type": "euclidean_spearman", "value": 73.24670207647144}, {"type": "manhattan_pearson", "value": 83.14172292187807}, {"type": "manhattan_spearman", "value": 73.24670207647144}]}, {"task": {"type": "STS"}, "dataset": 
{"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.51438108053523}, {"type": "cos_sim_spearman", "value": 81.9481311864648}, {"type": "euclidean_pearson", "value": 78.6683040592179}, {"type": "euclidean_spearman", "value": 81.9535649926177}, {"type": "manhattan_pearson", "value": 78.65396325536754}, {"type": "manhattan_spearman", "value": 81.96918240343872}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 80.6689275068653}, {"type": "mrr", "value": 95.021337594867}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 55.193999999999996}, {"type": "map_at_10", "value": 65.814}, {"type": "map_at_100", "value": 66.428}, {"type": "map_at_1000", "value": 66.447}, {"type": "map_at_3", "value": 63.304}, {"type": "map_at_5", "value": 64.64}, {"type": "mrr_at_1", "value": 57.99999999999999}, {"type": "mrr_at_10", "value": 66.957}, {"type": "mrr_at_100", "value": 67.405}, {"type": "mrr_at_1000", "value": 67.422}, {"type": "mrr_at_3", "value": 65.0}, {"type": "mrr_at_5", "value": 66.183}, {"type": "ndcg_at_1", "value": 57.99999999999999}, {"type": "ndcg_at_10", "value": 70.523}, {"type": "ndcg_at_100", "value": 72.987}, {"type": "ndcg_at_1000", "value": 73.605}, {"type": "ndcg_at_3", "value": 66.268}, {"type": "ndcg_at_5", "value": 68.27600000000001}, {"type": "precision_at_1", "value": 57.99999999999999}, {"type": "precision_at_10", "value": 9.467}, {"type": "precision_at_100", "value": 1.073}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 26.444000000000003}, {"type": "precision_at_5", "value": 17.2}, {"type": "recall_at_1", "value": 55.193999999999996}, {"type": "recall_at_10", "value": 83.52199999999999}, {"type": "recall_at_100", "value": 94.5}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 71.989}, {"type": "recall_at_5", "value": 77.31700000000001}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.73465346534654}, {"type": "cos_sim_ap", "value": 92.91719494015508}, {"type": "cos_sim_f1", "value": 86.46200301962756}, {"type": "cos_sim_precision", "value": 87.03140830800406}, {"type": "cos_sim_recall", "value": 85.9}, {"type": "dot_accuracy", "value": 99.73663366336633}, {"type": "dot_ap", "value": 92.90802848215259}, {"type": "dot_f1", "value": 86.46200301962756}, {"type": "dot_precision", "value": 87.03140830800406}, {"type": "dot_recall", "value": 85.9}, {"type": "euclidean_accuracy", "value": 99.73465346534654}, {"type": "euclidean_ap", "value": 92.91627363446204}, {"type": "euclidean_f1", "value": 86.43469490670702}, {"type": "euclidean_precision", "value": 87.18209562563581}, {"type": "euclidean_recall", "value": 85.7}, {"type": "manhattan_accuracy", "value": 99.73663366336633}, {"type": "manhattan_ap", "value": 92.90219877406929}, {"type": "manhattan_f1", "value": 
86.31471040492056}, {"type": "manhattan_precision", "value": 88.53838065194533}, {"type": "manhattan_recall", "value": 84.2}, {"type": "max_accuracy", "value": 99.73663366336633}, {"type": "max_ap", "value": 92.91719494015508}, {"type": "max_f1", "value": 86.46200301962756}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 60.73098998430779}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 34.64256206757585}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 54.749150614295694}, {"type": "mrr", "value": 55.78880984211867}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 28.863577054305907}, {"type": "cos_sim_spearman", "value": 27.538596944829774}, {"type": "dot_pearson", "value": 28.93043755116643}, {"type": "dot_spearman", "value": 27.733110516733987}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.22899999999999998}, {"type": "map_at_10", "value": 2.078}, {"type": "map_at_100", "value": 12.024}, {"type": "map_at_1000", "value": 29.036}, {"type": "map_at_3", "value": 0.681}, {"type": "map_at_5", "value": 1.083}, {"type": "mrr_at_1", "value": 86.0}, {"type": "mrr_at_10", "value": 92.667}, {"type": "mrr_at_100", "value": 92.667}, {"type": "mrr_at_1000", "value": 92.667}, {"type": "mrr_at_3", "value": 92.667}, {"type": "mrr_at_5", "value": 92.667}, {"type": "ndcg_at_1", "value": 82.0}, {"type": "ndcg_at_10", "value": 80.746}, {"type": "ndcg_at_100", "value": 61.090999999999994}, {"type": "ndcg_at_1000", "value": 55.034000000000006}, {"type": "ndcg_at_3", "value": 82.419}, {"type": "ndcg_at_5", "value": 81.018}, {"type": "precision_at_1", "value": 86.0}, {"type": "precision_at_10", "value": 86.2}, {"type": "precision_at_100", "value": 62.68}, {"type": "precision_at_1000", "value": 24.032}, {"type": "precision_at_3", "value": 88.667}, {"type": "precision_at_5", "value": 86.0}, {"type": "recall_at_1", "value": 0.22899999999999998}, {"type": "recall_at_10", "value": 2.263}, {"type": "recall_at_100", "value": 15.238999999999999}, {"type": "recall_at_1000", "value": 51.937}, {"type": "recall_at_3", "value": 0.719}, {"type": "recall_at_5", "value": 1.15}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (sqi-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "sqi-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 19.400000000000002}, {"type": "f1", "value": 15.386076064970075}, {"type": "precision", "value": 14.253878834615676}, {"type": "recall", "value": 19.400000000000002}]}, {"task": {"type": 
"BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fry-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fry-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 42.19653179190752}, {"type": "f1", "value": 37.726396917148364}, {"type": "precision", "value": 36.14643545279384}, {"type": "recall", "value": 42.19653179190752}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kur-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kur-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 18.536585365853657}, {"type": "f1", "value": 13.512010347376199}, {"type": "precision", "value": 12.034068912117693}, {"type": "recall", "value": 18.536585365853657}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tur-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tur-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 81.69999999999999}, {"type": "f1", "value": 77.37888888888888}, {"type": "precision", "value": 75.49583333333332}, {"type": "recall", "value": 81.69999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (deu-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "deu-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 97.39999999999999}, {"type": "f1", "value": 96.56666666666666}, {"type": "precision", "value": 96.16666666666667}, {"type": "recall", "value": 97.39999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nld-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nld-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 90.0}, {"type": "f1", "value": 87.22333333333333}, {"type": "precision", "value": 85.89166666666667}, {"type": "recall", "value": 90.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ron-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ron-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 64.7}, {"type": "f1", "value": 59.10904761904763}, {"type": "precision", "value": 56.91968253968254}, {"type": "recall", "value": 64.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ang-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ang-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 38.80597014925373}, {"type": "f1", "value": 30.890784174366264}, {"type": "precision", "value": 28.327114427860696}, {"type": "recall", "value": 38.80597014925373}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ido-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ido-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 53.900000000000006}, {"type": "f1", "value": 48.294138583638585}, {"type": "precision", "value": 46.333495670995674}, {"type": "recall", "value": 53.900000000000006}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (jav-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "jav-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": 
"accuracy", "value": 11.707317073170733}, {"type": "f1", "value": 8.999999999999998}, {"type": "precision", "value": 8.175377468060395}, {"type": "recall", "value": 11.707317073170733}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (isl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "isl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 15.9}, {"type": "f1", "value": 12.451226269430602}, {"type": "precision", "value": 11.404807799760325}, {"type": "recall", "value": 15.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (slv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "slv-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 41.919805589307416}, {"type": "f1", "value": 35.880619060297064}, {"type": "precision", "value": 33.77682308241239}, {"type": "recall", "value": 41.919805589307416}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cym-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cym-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 10.956521739130434}, {"type": "f1", "value": 9.098715976676996}, {"type": "precision", "value": 8.659935858401333}, {"type": "recall", "value": 10.956521739130434}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kaz-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kaz-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.652173913043478}, {"type": "f1", "value": 9.154324883225136}, {"type": "precision", "value": 8.505898125360801}, {"type": "recall", "value": 11.652173913043478}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (est-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "est-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 9.700000000000001}, {"type": "f1", "value": 7.431679431679432}, {"type": "precision", "value": 6.799925118740907}, {"type": "recall", "value": 9.700000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (heb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "heb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 77.5}, {"type": "f1", "value": 72.39999999999999}, {"type": "precision", "value": 70.13444444444444}, {"type": "recall", "value": 77.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gla-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gla-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 5.548854041013269}, {"type": "f1", "value": 4.233155465362944}, {"type": "precision", "value": 3.948150869646547}, {"type": "recall", "value": 5.548854041013269}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mar-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mar-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 73.5}, {"type": "f1", "value": 67.35333333333332}, {"type": "precision", "value": 64.63666666666666}, {"type": "recall", "value": 73.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB 
Tatoeba (lat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 27.700000000000003}, {"type": "f1", "value": 21.152765495941964}, {"type": "precision", "value": 19.27832403707404}, {"type": "recall", "value": 27.700000000000003}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bel-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bel-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 48.1}, {"type": "f1", "value": 41.21001443001443}, {"type": "precision", "value": 38.628495670995676}, {"type": "recall", "value": 48.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pms-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pms-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 40.0}, {"type": "f1", "value": 34.32060003488575}, {"type": "precision", "value": 32.32134353741497}, {"type": "recall", "value": 40.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gle-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gle-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 6.800000000000001}, {"type": "f1", "value": 4.3954389450190465}, {"type": "precision", "value": 3.893838027469606}, {"type": "recall", "value": 6.800000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pes-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pes-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 51.800000000000004}, {"type": "f1", "value": 45.04222943722944}, {"type": "precision", "value": 42.541984126984126}, {"type": "recall", "value": 51.800000000000004}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nob-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nob-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 83.1}, {"type": "f1", "value": 79.20675324675324}, {"type": "precision", "value": 77.44944444444444}, {"type": "recall", "value": 83.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bul-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bul-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 66.8}, {"type": "f1", "value": 60.25746031746031}, {"type": "precision", "value": 57.55250000000001}, {"type": "recall", "value": 66.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cbk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cbk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 63.6}, {"type": "f1", "value": 56.73421356421356}, {"type": "precision", "value": 54.02218253968254}, {"type": "recall", "value": 63.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hun-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hun-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 17.599999999999998}, {"type": "f1", "value": 13.17699134199134}, {"type": "precision", "value": 
11.77444805194805}, {"type": "recall", "value": 17.599999999999998}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (uig-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "uig-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 2.0}, {"type": "f1", "value": 1.3126923076923078}, {"type": "precision", "value": 1.104952380952381}, {"type": "recall", "value": 2.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (rus-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "rus-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 88.3}, {"type": "f1", "value": 84.96333333333334}, {"type": "precision", "value": 83.38333333333333}, {"type": "recall", "value": 88.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (spa-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "spa-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 94.69999999999999}, {"type": "f1", "value": 93.12333333333333}, {"type": "precision", "value": 92.375}, {"type": "recall", "value": 94.69999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hye-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hye-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 0.6738544474393532}, {"type": "f1", "value": 0.3690849566291394}, {"type": "precision", "value": 0.3305452159899599}, {"type": "recall", "value": 0.6738544474393532}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tel-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tel-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 71.7948717948718}, {"type": "f1", "value": 65.37037037037037}, {"type": "precision", "value": 62.46438746438747}, {"type": "recall", "value": 71.7948717948718}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (afr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "afr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 56.699999999999996}, {"type": "f1", "value": 50.58054945054945}, {"type": "precision", "value": 48.313047619047616}, {"type": "recall", "value": 56.699999999999996}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mon-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mon-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.863636363636363}, {"type": "f1", "value": 10.948429096156369}, {"type": "precision", "value": 10.227287994137523}, {"type": "recall", "value": 13.863636363636363}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (arz-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "arz-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 62.473794549266245}, {"type": "f1", "value": 56.04172906059699}, {"type": "precision", "value": 53.26694619147448}, {"type": "recall", "value": 62.473794549266245}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hrv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hrv-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 40.0}, {"type": "f1", "value": 34.62948179271708}, {"type": "precision", "value": 32.699030910609864}, {"type": "recall", "value": 40.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nov-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nov-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 60.311284046692606}, {"type": "f1", "value": 54.06182447038479}, {"type": "precision", "value": 51.757921067259595}, {"type": "recall", "value": 60.311284046692606}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (gsw-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "gsw-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 43.58974358974359}, {"type": "f1", "value": 37.042359350051655}, {"type": "precision", "value": 34.75783475783476}, {"type": "recall", "value": 43.58974358974359}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nds-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nds-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 56.49999999999999}, {"type": "f1", "value": 49.471269841269844}, {"type": "precision", "value": 46.742182539682545}, {"type": "recall", "value": 56.49999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ukr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ukr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 71.5}, {"type": "f1", "value": 65.32880952380951}, {"type": "precision", "value": 62.71261904761904}, {"type": "recall", "value": 71.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (uzb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "uzb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.448598130841122}, {"type": "f1", "value": 7.861361294691689}, {"type": "precision", "value": 6.961045509526818}, {"type": "recall", "value": 11.448598130841122}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lit-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lit-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.5}, {"type": "f1", "value": 10.448586132968154}, {"type": "precision", "value": 9.624691955878397}, {"type": "recall", "value": 13.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ina-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ina-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 82.19999999999999}, {"type": "f1", "value": 78.25366946778712}, {"type": "precision", "value": 76.54291666666667}, {"type": "recall", "value": 82.19999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lfn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lfn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 53.5}, {"type": "f1", "value": 47.48505411255411}, {"type": "precision", "value": 45.29801587301587}, {"type": "recall", "value": 53.5}]}, {"task": {"type": 
"BitextMining"}, "dataset": {"name": "MTEB Tatoeba (zsm-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "zsm-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 61.1}, {"type": "f1", "value": 54.60758056758057}, {"type": "precision", "value": 52.16455433455434}, {"type": "recall", "value": 61.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ita-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ita-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 85.1}, {"type": "f1", "value": 81.98506715506716}, {"type": "precision", "value": 80.64754901960784}, {"type": "recall", "value": 85.1}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cmn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cmn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89.2}, {"type": "f1", "value": 86.13333333333333}, {"type": "precision", "value": 84.65}, {"type": "recall", "value": 89.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (lvs-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "lvs-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.600000000000001}, {"type": "f1", "value": 10.721816580317723}, {"type": "precision", "value": 9.97922024538847}, {"type": "recall", "value": 13.600000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (glg-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "glg-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 79.0}, {"type": "f1", "value": 74.2652380952381}, {"type": "precision", "value": 72.18690476190476}, {"type": "recall", "value": 79.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ceb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ceb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 12.833333333333332}, {"type": "f1", "value": 10.45993265993266}, {"type": "precision", "value": 9.849548907882243}, {"type": "recall", "value": 12.833333333333332}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bre-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bre-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 8.3}, {"type": "f1", "value": 5.457311371692176}, {"type": "precision", "value": 4.8466941508148595}, {"type": "recall", "value": 8.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ben-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ben-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 26.3}, {"type": "f1", "value": 20.851341154819416}, {"type": "precision", "value": 19.1173617945522}, {"type": "recall", "value": 26.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swg-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swg-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 41.964285714285715}, {"type": "f1", "value": 36.38605442176871}, {"type": "precision", "value": 
34.523809523809526}, {"type": "recall", "value": 41.964285714285715}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (arq-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "arq-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 26.454445664105382}, {"type": "f1", "value": 20.67692765826684}, {"type": "precision", "value": 18.684070229075715}, {"type": "recall", "value": 26.454445664105382}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kab-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kab-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 2.8000000000000003}, {"type": "f1", "value": 1.9487240537240536}, {"type": "precision", "value": 1.7766582325720255}, {"type": "recall", "value": 2.8000000000000003}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fra-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fra-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.5}, {"type": "f1", "value": 89.39}, {"type": "precision", "value": 88.425}, {"type": "recall", "value": 91.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (por-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "por-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 91.5}, {"type": "f1", "value": 89.38333333333333}, {"type": "precision", "value": 88.36666666666667}, {"type": "recall", "value": 91.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 9.2}, {"type": "f1", "value": 6.672282438325198}, {"type": "precision", "value": 6.046073589145276}, {"type": "recall", "value": 9.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (oci-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "oci-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 45.2}, {"type": "f1", "value": 39.12095238095238}, {"type": "precision", "value": 36.820952380952384}, {"type": "recall", "value": 45.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pol-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pol-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 86.8}, {"type": "f1", "value": 83.35000000000001}, {"type": "precision", "value": 81.825}, {"type": "recall", "value": 86.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (war-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "war-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 13.5}, {"type": "f1", "value": 10.66862856136998}, {"type": "precision", "value": 9.845928551928552}, {"type": "recall", "value": 13.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (aze-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "aze-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 33.4}, {"type": "f1", "value": 
27.78153389993659}, {"type": "precision", "value": 25.778055555555557}, {"type": "recall", "value": 33.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (vie-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "vie-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 57.699999999999996}, {"type": "f1", "value": 50.440714285714286}, {"type": "precision", "value": 47.64396825396825}, {"type": "recall", "value": 57.699999999999996}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (nno-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "nno-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 62.2}, {"type": "f1", "value": 56.0098625351257}, {"type": "precision", "value": 53.691914098972916}, {"type": "recall", "value": 62.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cha-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cha-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 27.00729927007299}, {"type": "f1", "value": 22.798053527980535}, {"type": "precision", "value": 21.107055961070557}, {"type": "recall", "value": 27.00729927007299}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mhr-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mhr-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 6.2}, {"type": "f1", "value": 4.295544090473964}, {"type": "precision", "value": 3.913153952193392}, {"type": "recall", "value": 6.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dan-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dan-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 77.10000000000001}, {"type": "f1", "value": 72.49333333333334}, {"type": "precision", "value": 70.53368637110017}, {"type": "recall", "value": 77.10000000000001}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ell-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ell-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 15.2}, {"type": "f1", "value": 10.429591693330824}, {"type": "precision", "value": 9.145801926831338}, {"type": "recall", "value": 15.2}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (amh-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "amh-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 1.7857142857142856}, {"type": "f1", "value": 0.3635204081632653}, {"type": "precision", "value": 0.205026455026455}, {"type": "recall", "value": 1.7857142857142856}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (pam-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "pam-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 6.4}, {"type": "f1", "value": 4.8412763053939525}, {"type": "precision", "value": 4.444087810337809}, {"type": "recall", "value": 6.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hsb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hsb-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 43.47826086956522}, {"type": "f1", "value": 37.13266949291794}, {"type": "precision", "value": 34.655332590115194}, {"type": "recall", "value": 43.47826086956522}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (srp-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "srp-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 42.0}, {"type": "f1", "value": 35.412229437229435}, {"type": "precision", "value": 32.907539682539685}, {"type": "recall", "value": 42.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (epo-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "epo-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 36.0}, {"type": "f1", "value": 30.53874458874459}, {"type": "precision", "value": 28.711192408382807}, {"type": "recall", "value": 36.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kzj-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kzj-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 7.9}, {"type": "f1", "value": 5.80190114561213}, {"type": "precision", "value": 5.298527531836355}, {"type": "recall", "value": 7.9}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (awa-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "awa-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 49.35064935064935}, {"type": "f1", "value": 41.57805638325119}, {"type": "precision", "value": 38.87445887445887}, {"type": "recall", "value": 49.35064935064935}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fao-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fao-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 25.572519083969464}, {"type": "f1", "value": 21.338006776938073}, {"type": "precision", "value": 20.194474736459465}, {"type": "recall", "value": 25.572519083969464}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mal-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mal-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 79.62154294032024}, {"type": "f1", "value": 74.47355652595827}, {"type": "precision", "value": 72.2076661814653}, {"type": "recall", "value": 79.62154294032024}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ile-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ile-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 68.0}, {"type": "f1", "value": 61.80859649122807}, {"type": "precision", "value": 59.30381381381381}, {"type": "recall", "value": 68.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (bos-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "bos-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 42.93785310734463}, {"type": "f1", "value": 36.72617201306135}, {"type": "precision", "value": 34.72641059505466}, {"type": "recall", "value": 42.93785310734463}]}, {"task": {"type": "BitextMining"}, 
"dataset": {"name": "MTEB Tatoeba (cor-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cor-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 5.5}, {"type": "f1", "value": 3.8651658986175113}, {"type": "precision", "value": 3.4432814407814405}, {"type": "recall", "value": 5.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (cat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "cat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 69.19999999999999}, {"type": "f1", "value": 63.41880952380953}, {"type": "precision", "value": 61.07913419913419}, {"type": "recall", "value": 69.19999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (eus-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "eus-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 15.4}, {"type": "f1", "value": 11.672122577122575}, {"type": "precision", "value": 10.59919974661354}, {"type": "recall", "value": 15.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (yue-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "yue-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 58.5}, {"type": "f1", "value": 51.31880452880453}, {"type": "precision", "value": 48.60550125313283}, {"type": "recall", "value": 58.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swe-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swe-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89.3}, {"type": "f1", "value": 86.32666666666667}, {"type": "precision", "value": 84.98333333333333}, {"type": "recall", "value": 89.3}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dtp-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dtp-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 5.7}, {"type": "f1", "value": 3.8739805216757546}, {"type": "precision", "value": 3.4734608954367014}, {"type": "recall", "value": 5.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kat-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kat-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 0.8042895442359249}, {"type": "f1", "value": 0.7596067917783735}, {"type": "precision", "value": 0.7372654155495978}, {"type": "recall", "value": 0.8042895442359249}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (jpn-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "jpn-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 89.7}, {"type": "f1", "value": 86.92333333333333}, {"type": "precision", "value": 85.64166666666667}, {"type": "recall", "value": 89.7}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (csb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "csb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 26.08695652173913}, {"type": "f1", "value": 20.517863778733343}, {"type": "precision", "value": 18.901098901098898}, 
{"type": "recall", "value": 26.08695652173913}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (xho-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "xho-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 12.676056338028168}, {"type": "f1", "value": 9.526324614352783}, {"type": "precision", "value": 9.006292657908235}, {"type": "recall", "value": 12.676056338028168}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (orv-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "orv-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 24.910179640718564}, {"type": "f1", "value": 19.645099411566473}, {"type": "precision", "value": 17.676076418591386}, {"type": "recall", "value": 24.910179640718564}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ind-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ind-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 61.4}, {"type": "f1", "value": 54.64269841269841}, {"type": "precision", "value": 51.981071428571425}, {"type": "recall", "value": 61.4}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tuk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tuk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.330049261083744}, {"type": "f1", "value": 9.610016420361248}, {"type": "precision", "value": 9.123781574258464}, {"type": "recall", "value": 11.330049261083744}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (max-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "max-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 27.816901408450708}, {"type": "f1", "value": 22.51925345174495}, {"type": "precision", "value": 21.10468365750056}, {"type": "recall", "value": 27.816901408450708}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (swh-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "swh-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 11.282051282051283}, {"type": "f1", "value": 7.777167097237831}, {"type": "precision", "value": 7.050109879436802}, {"type": "recall", "value": 11.282051282051283}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (hin-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "hin-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 86.0}, {"type": "f1", "value": 82.05857142857143}, {"type": "precision", "value": 80.25}, {"type": "recall", "value": 86.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (dsb-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "dsb-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 34.44676409185804}, {"type": "f1", "value": 28.296517215097587}, {"type": "precision", "value": 26.16624956236465}, {"type": "recall", "value": 34.44676409185804}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ber-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ber-eng", "split": "test", "revision": 
"9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 7.199999999999999}, {"type": "f1", "value": 5.500051631938041}, {"type": "precision", "value": 5.164411510424442}, {"type": "recall", "value": 7.199999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tam-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tam-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 71.9869706840391}, {"type": "f1", "value": 65.79339227547696}, {"type": "precision", "value": 63.16503800217155}, {"type": "recall", "value": 71.9869706840391}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (slk-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "slk-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 70.89999999999999}, {"type": "f1", "value": 65.4152380952381}, {"type": "precision", "value": 63.106666666666655}, {"type": "recall", "value": 70.89999999999999}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tgl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tgl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 21.0}, {"type": "f1", "value": 17.86438197644649}, {"type": "precision", "value": 16.84469948469949}, {"type": "recall", "value": 21.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ast-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ast-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 62.20472440944882}, {"type": "f1", "value": 55.81364829396325}, {"type": "precision", "value": 53.262092238470196}, {"type": "recall", "value": 62.20472440944882}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (mkd-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "mkd-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 41.8}, {"type": "f1", "value": 34.724603174603175}, {"type": "precision", "value": 32.040277777777774}, {"type": "recall", "value": 41.8}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (khm-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "khm-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 0.41551246537396125}, {"type": "f1", "value": 0.3462603878116343}, {"type": "precision", "value": 0.32317636195752536}, {"type": "recall", "value": 0.41551246537396125}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ces-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ces-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 85.6}, {"type": "f1", "value": 81.81333333333333}, {"type": "precision", "value": 80.08333333333334}, {"type": "recall", "value": 85.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tzl-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tzl-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 31.73076923076923}, {"type": "f1", "value": 26.097374847374844}, {"type": "precision", "value": 24.31891025641026}, {"type": "recall", "value": 31.73076923076923}]}, 
{"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (urd-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "urd-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 9.6}, {"type": "f1", "value": 6.598392371412457}, {"type": "precision", "value": 5.855494356434758}, {"type": "recall", "value": 9.6}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (ara-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "ara-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 83.5}, {"type": "f1", "value": 79.65190476190476}, {"type": "precision", "value": 77.875}, {"type": "recall", "value": 83.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (kor-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "kor-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 80.5}, {"type": "f1", "value": 75.75999999999999}, {"type": "precision", "value": 73.60333333333332}, {"type": "recall", "value": 80.5}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (yid-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "yid-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 2.1226415094339623}, {"type": "f1", "value": 1.4622641509433962}, {"type": "precision", "value": 1.2637578616352203}, {"type": "recall", "value": 2.1226415094339623}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (fin-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "fin-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 23.0}, {"type": "f1", "value": 18.111780719280716}, {"type": "precision", "value": 16.497738095238095}, {"type": "recall", "value": 23.0}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (tha-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "tha-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 4.562043795620438}, {"type": "f1", "value": 3.1632119907667358}, {"type": "precision", "value": 2.8806772100567724}, {"type": "recall", "value": 4.562043795620438}]}, {"task": {"type": "BitextMining"}, "dataset": {"name": "MTEB Tatoeba (wuu-eng)", "type": "mteb/tatoeba-bitext-mining", "config": "wuu-eng", "split": "test", "revision": "9080400076fbadbb4c4dcb136ff4eddc40b42553"}, "metrics": [{"type": "accuracy", "value": 75.9}, {"type": "f1", "value": 70.57690476190476}, {"type": "precision", "value": 68.19761904761904}, {"type": "recall", "value": 75.9}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.804}, {"type": "map_at_10", "value": 11.267000000000001}, {"type": "map_at_100", "value": 17.034}, {"type": "map_at_1000", "value": 18.733}, {"type": "map_at_3", "value": 6.071}, {"type": "map_at_5", "value": 8.187}, {"type": "mrr_at_1", "value": 34.694}, {"type": "mrr_at_10", "value": 50.504000000000005}, {"type": "mrr_at_100", "value": 51.162}, {"type": "mrr_at_1000", "value": 51.162}, {"type": "mrr_at_3", "value": 45.918}, {"type": "mrr_at_5", "value": 49.082}, {"type": "ndcg_at_1", "value": 33.672999999999995}, {"type": 
"ndcg_at_10", "value": 27.478}, {"type": "ndcg_at_100", "value": 37.961}, {"type": "ndcg_at_1000", "value": 50.117}, {"type": "ndcg_at_3", "value": 30.156}, {"type": "ndcg_at_5", "value": 29.293999999999997}, {"type": "precision_at_1", "value": 34.694}, {"type": "precision_at_10", "value": 24.082}, {"type": "precision_at_100", "value": 7.632999999999999}, {"type": "precision_at_1000", "value": 1.569}, {"type": "precision_at_3", "value": 30.612000000000002}, {"type": "precision_at_5", "value": 29.387999999999998}, {"type": "recall_at_1", "value": 2.804}, {"type": "recall_at_10", "value": 17.785}, {"type": "recall_at_100", "value": 47.452}, {"type": "recall_at_1000", "value": 84.687}, {"type": "recall_at_3", "value": 6.9190000000000005}, {"type": "recall_at_5", "value": 10.807}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 74.5162}, {"type": "ap", "value": 15.022137849208509}, {"type": "f1", "value": 56.77914300422838}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 59.589700056593095}, {"type": "f1", "value": 59.93893560752363}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 40.11538634360855}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 83.97806520832091}, {"type": "cos_sim_ap", "value": 67.80381341664686}, {"type": "cos_sim_f1", "value": 63.01665268958908}, {"type": "cos_sim_precision", "value": 57.713407943822695}, {"type": "cos_sim_recall", "value": 69.39313984168865}, {"type": "dot_accuracy", "value": 83.9899862907552}, {"type": "dot_ap", "value": 67.80914960711299}, {"type": "dot_f1", "value": 63.0287144048612}, {"type": "dot_precision", "value": 57.46252444058223}, {"type": "dot_recall", "value": 69.78891820580475}, {"type": "euclidean_accuracy", "value": 83.9601835846695}, {"type": "euclidean_ap", "value": 67.79862461635126}, {"type": "euclidean_f1", "value": 63.02426882389545}, {"type": "euclidean_precision", "value": 59.64664310954063}, {"type": "euclidean_recall", "value": 66.80738786279683}, {"type": "manhattan_accuracy", "value": 83.94230196101806}, {"type": "manhattan_ap", "value": 67.78560087328111}, {"type": "manhattan_f1", "value": 63.10622881851117}, {"type": "manhattan_precision", "value": 56.63939584644431}, {"type": "manhattan_recall", "value": 71.2401055408971}, {"type": "max_accuracy", "value": 83.9899862907552}, {"type": "max_ap", "value": 67.80914960711299}, {"type": "max_f1", "value": 63.10622881851117}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, 
"metrics": [{"type": "cos_sim_accuracy", "value": 89.04994760740482}, {"type": "cos_sim_ap", "value": 85.71231674852108}, {"type": "cos_sim_f1", "value": 78.92350867093619}, {"type": "cos_sim_precision", "value": 74.07807645549101}, {"type": "cos_sim_recall", "value": 84.44718201416693}, {"type": "dot_accuracy", "value": 89.05188807389295}, {"type": "dot_ap", "value": 85.71776365526502}, {"type": "dot_f1", "value": 78.92055922835156}, {"type": "dot_precision", "value": 74.34152317430069}, {"type": "dot_recall", "value": 84.10070834616569}, {"type": "euclidean_accuracy", "value": 89.05188807389295}, {"type": "euclidean_ap", "value": 85.7114644968015}, {"type": "euclidean_f1", "value": 78.9458525345622}, {"type": "euclidean_precision", "value": 74.14119556397078}, {"type": "euclidean_recall", "value": 84.41638435478903}, {"type": "manhattan_accuracy", "value": 89.06547133930997}, {"type": "manhattan_ap", "value": 85.70658730333459}, {"type": "manhattan_f1", "value": 78.91009741543552}, {"type": "manhattan_precision", "value": 74.00714719169308}, {"type": "manhattan_recall", "value": 84.5087773329227}, {"type": "max_accuracy", "value": 89.06547133930997}, {"type": "max_ap", "value": 85.71776365526502}, {"type": "max_f1", "value": 78.9458525345622}]}]}]} | amazon/Titan-text-embeddings-v2 | null | [
"transformers",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"fr",
"de",
"es",
"ja",
"zh",
"hi",
"ar",
"it",
"pt",
"sv",
"ko",
"he",
"cs",
"tr",
"tl",
"ru",
"nl",
"pl",
"ta",
"mr",
"ml",
"te",
"kn",
"vi",
"id",
"fa",
"hu",
"el",
"ro",
"da",
"th",
"fi",
"sk",
"uk",
"no",
"bg",
"ca",
"sr",
"hr",
"lt",
"sl",
"et",
"la",
"bn",
"lv",
"ms",
"bs",
"sq",
"az",
"gl",
"is",
"ka",
"mk",
"eu",
"hy",
"ne",
"ur",
"kk",
"mn",
"be",
"uz",
"km",
"nn",
"gu",
"my",
"cy",
"eo",
"si",
"tt",
"sw",
"af",
"ga",
"pa",
"ku",
"ky",
"tg",
"or",
"lo",
"fo",
"mt",
"so",
"lb",
"am",
"oc",
"jv",
"ha",
"ps",
"sa",
"fy",
"mg",
"as",
"ba",
"br",
"tk",
"co",
"dv",
"rw",
"ht",
"yi",
"sd",
"zu",
"gd",
"bo",
"ug",
"mi",
"rm",
"xh",
"su",
"yo",
"license:other",
"model-index",
"region:us"
] | null | 2024-04-30T12:43:01+00:00 |
text-generation | transformers | {} | itay-nakash/model_fd30467e2d | null | [
"transformers",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:44:34+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch2 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:44:43+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-dpo-full-sft-wo-kqa_silver_wogold
This model is a fine-tuned version of [Minbyul/mistral-7b-wo-kqa_silver_wogold-sft](https://huggingface.co/Minbyul/mistral-7b-wo-kqa_silver_wogold-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0530
- Rewards/chosen: -2.4760
- Rewards/rejected: -21.0723
- Rewards/accuracies: 0.9700
- Rewards/margins: 18.5963
- Logps/rejected: -2709.2131
- Logps/chosen: -407.7003
- Logits/rejected: -2.0225
- Logits/chosen: -2.2276
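
For readers new to DPO logging: `rewards/margins` as reported by trl's `DPOTrainer` is the mean of `rewards/chosen - rewards/rejected`, which the numbers above satisfy: -2.4760 - (-21.0723) = 18.5963. Together with the 0.97 reward accuracy, this indicates the policy separates chosen from rejected completions on this evaluation set very cleanly.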
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
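
For illustration only, the list above maps onto a trl `DPOTrainer` setup roughly as follows. The output directory, the dataset split name, and the four-GPU `accelerate` launch are assumptions rather than values taken from this card, and exact argument names vary across trl releases; treat this as a sketch, not the authors' training script.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Minbyul/mistral-7b-wo-kqa_silver_wogold-sft"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hyperparameters copied from the list above; all other values are illustrative.
args = DPOConfig(
    output_dir="mistral-7b-dpo-full-sft-wo-kqa_silver_wogold",  # assumed name
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 4 GPUs x 8 x 2 = effective train batch of 64
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

train_ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    tokenizer=tokenizer,  # newer trl releases name this argument processing_class
)
trainer.train()
```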
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2735 | 0.32 | 100 | 0.0529 | -1.3592 | -8.1857 | 0.9700 | 6.8265 | -1420.5509 | -296.0260 | -2.7457 | -2.5375 |
| 0.1321 | 0.63 | 200 | 0.0507 | -2.0405 | -16.8511 | 0.9600 | 14.8106 | -2287.0967 | -364.1557 | -2.2518 | -2.3349 |
| 0.117 | 0.95 | 300 | 0.0531 | -2.4855 | -21.1345 | 0.9700 | 18.6490 | -2715.4331 | -408.6504 | -2.0210 | -2.2273 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/mistral-7b-wo-kqa_silver_wogold-sft", "model-index": [{"name": "mistral-7b-dpo-full-sft-wo-kqa_silver_wogold", "results": []}]} | Minbyul/mistral-7b-dpo-full-sft-wo-kqa_silver_wogold | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/mistral-7b-wo-kqa_silver_wogold-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:45:00+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch3 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:48:08+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so of the static quants, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
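For a quick local test, the sketch below shows one possible way to fetch a single quant from this repo and run it with `llama-cpp-python`. The chosen filename, context length, and prompt are illustrative assumptions, not an official recommendation.
```python
# Hedged sketch: download one of the quants listed below and run it locally.
# Filename, n_ctx, and the prompt are assumptions for illustration only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF",
    filename="Llama-3-8B-Dolfin-v0.2-Instruct.Q4_K_M.gguf",  # "fast, recommended" quant
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length is an assumption
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```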
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF/resolve/main/Llama-3-8B-Dolfin-v0.2-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "base_model": "AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct", "quantized_by": "mradermacher"} | mradermacher/Llama-3-8B-Dolfin-v0.2-Instruct-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:AwanLLM/Llama-3-8B-Dolfin-v0.2-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:48:19+00:00 |
text-generation | transformers | {} | itay-nakash/model_42c7bd8eba | null | [
"transformers",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:49:02+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | nuvocare/adpater_nuvochat | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:49:38+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EyaZr/eya-test | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:49:49+00:00 |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-real-boat-dataset
This model is a fine-tuned version of [zhuchi76/detr-resnet-50-finetuned-boat-dataset](https://huggingface.co/zhuchi76/detr-resnet-50-finetuned-boat-dataset) on the boat_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
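In the absence of documented usage, here is a minimal, hedged inference sketch for trying the checkpoint as a standard DETR object detector; the image path, score threshold, and expected labels are illustrative assumptions rather than values supplied by the model authors.
```python
# Minimal inference sketch (image path and threshold are placeholders).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo = "SIS-2024-spring/detr-resnet-50-finetuned-real-boat-dataset"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForObjectDetection.from_pretrained(repo)

image = Image.open("harbor.jpg")  # hypothetical test image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {box.tolist()}")
```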
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["boat_dataset"], "base_model": "zhuchi76/detr-resnet-50-finetuned-boat-dataset", "model-index": [{"name": "detr-resnet-50-finetuned-real-boat-dataset", "results": []}]} | SIS-2024-spring/detr-resnet-50-finetuned-real-boat-dataset | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:boat_dataset",
"base_model:zhuchi76/detr-resnet-50-finetuned-boat-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T12:50:27+00:00 |
null | null | {} | weqweasdas/zephyr-7b-dpo-qlora | null | [
"region:us"
] | null | 2024-04-30T12:50:33+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alignment-adaptor-test04
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
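Pending official usage notes, one plausible way to try this adapter is sketched below; the dtype, device placement, and prompt are assumptions, and the base model is taken from the card above.
```python
# Hedged sketch of loading this SFT adapter on top of its zephyr-7b-beta base.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "Ksgk-fy/alignment-adaptor-test04"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # assumed settings
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain in one sentence what a LoRA adapter is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```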
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "HuggingFaceH4/zephyr-7b-beta", "model-index": [{"name": "alignment-adaptor-test04", "results": []}]} | Ksgk-fy/alignment-adaptor-test04 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-04-30T12:51:42+00:00 |
null | null | {} | esolteric/eso70 | null | [
"region:us"
] | null | 2024-04-30T12:52:59+00:00 |
|
null | null | {} | esolteric/eso71 | null | [
"region:us"
] | null | 2024-04-30T12:53:11+00:00 |
|
null | null | {} | esolteric/eso72 | null | [
"region:us"
] | null | 2024-04-30T12:53:20+00:00 |
|
null | null | {} | esolteric/eso73 | null | [
"region:us"
] | null | 2024-04-30T12:53:28+00:00 |
|
null | null | {} | esolteric/eso74 | null | [
"region:us"
] | null | 2024-04-30T12:53:41+00:00 |
|
null | null | {} | esolteric/eso75 | null | [
"region:us"
] | null | 2024-04-30T12:53:55+00:00 |
|
text-generation | transformers | # Model Card for Model ID
## Model Details
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Downstream Use [optional]
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
[More Information Needed]
### Training Procedure
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed]
#### Speeds, Sizes, Times [optional]
[More Information Needed]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
[More Information Needed]
#### Factors
[More Information Needed]
#### Metrics
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
[More Information Needed]
## Environmental Impact | {"license": "apache-2.0"} | Jayant9928/orpo_v2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:55:12+00:00 |
null | null | Copies of Vietnamese chatbot models | {"language": ["vi"]} | duyv/ChatBot-GGUF-VietNam | null | [
"gguf",
"vi",
"region:us"
] | null | 2024-04-30T12:55:47+00:00 |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Business_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:56:17+00:00 |
text-generation | transformers | {} | robzchhangte/8-MizGPT-v4 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T12:59:07+00:00 |
|
null | null | {} | mozksoft/sweetMix-v22Flat-coreml-q6 | null | [
"region:us"
] | null | 2024-04-30T13:00:08+00:00 |
|
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Politics_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:00:13+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta1.1_kl0.01-epoch2 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:00:52+00:00 |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | IDPZEro/dummy-model | null | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:02:20+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-13b-cost-UF-5e-7-nojudge
This model is a fine-tuned version of [allenai/tulu-2-13b](https://huggingface.co/allenai/tulu-2-13b) on an unspecified dataset.
It achieves the following results on the evaluation set (the reward terms are sketched after this list):
- Loss: 0.6931
- Rewards/chosen: 0.0268
- Rewards/rejected: 0.0260
- Rewards/accuracies: 0.5450
- Rewards/margins: 0.0008
- Rewards/margins Max: 0.0629
- Rewards/margins Min: -0.0642
- Rewards/margins Std: 0.0421
- Logps/rejected: -327.6042
- Logps/chosen: -331.2294
- Logits/rejected: -0.8979
- Logits/chosen: -1.0239
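A short, hedged sketch of how the reward columns above are conventionally computed, assuming standard TRL-style DPO bookkeeping; `beta` and the example log-probabilities are illustrative, not values taken from this run.
```python
# Assumed DPO convention: the implicit reward of a completion is
#   r(x, y) = beta * (log pi_theta(y|x) - log pi_ref(y|x))
# and the logged metrics aggregate r over chosen/rejected pairs.
import torch

def dpo_reward(policy_logps: torch.Tensor, ref_logps: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """policy_logps / ref_logps: summed token log-probs of a completion."""
    return beta * (policy_logps - ref_logps)

# Hypothetical per-example log-probs (NOT values from this training run):
chosen = dpo_reward(torch.tensor([-330.0, -412.5]), torch.tensor([-330.4, -413.0]))
rejected = dpo_reward(torch.tensor([-327.0, -405.0]), torch.tensor([-327.1, -405.2]))

margins = chosen - rejected                  # -> "Rewards/margins"
accuracy = (margins > 0).float().mean()      # -> "Rewards/accuracies"
print(margins.tolist(), accuracy.item())
```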
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6681 | 1.0 | 1245 | 0.6931 | 0.0268 | 0.0260 | 0.5450 | 0.0008 | 0.0629 | -0.0642 | 0.0421 | -327.6042 | -331.2294 | -0.8979 | -1.0239 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-13b", "model-index": [{"name": "tulu2-13b-cost-UF-5e-7-nojudge", "results": []}]} | just1nseo/tulu2-13b-cost-UF-5e-7-nojudge | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-13b",
"region:us"
] | null | 2024-04-30T13:02:30+00:00 |
null | null | {} | ricardomd/busqueda | null | [
"region:us"
] | null | 2024-04-30T13:02:45+00:00 |
|
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Entertainment_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:02:54+00:00 |
text-generation | transformers | {} | israel/zephyr-7b-gemma-sft-african-ultrachat-2000k | null | [
"transformers",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:02:57+00:00 |
|
text-generation | transformers | {} | baesad/llama-2-7b-fine-tune | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:02:57+00:00 |
|
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fil_b64_le5_s4000
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4125
## Model description
More information needed
## Intended uses & limitations
More information needed
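Until usage notes are added, the sketch below shows one plausible way to run inference with this SpeechT5 fine-tune; the processor source, speaker embedding, example sentence (assumed Filipino), and output path are illustrative assumptions.
```python
# Hedged TTS inference sketch for this SpeechT5 fine-tune.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "mikhail-panzo/fil_b64_le5_s4000"
# If the repo does not ship processor files, load the processor from the base
# checkpoint "microsoft/speecht5_tts" instead (an assumption, not documented here).
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Magandang umaga sa inyong lahat.", return_tensors="pt")

# x-vector speaker embedding borrowed from a public dataset; not necessarily
# the speaker this checkpoint was trained on.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("sample.wav", speech.numpy(), samplerate=16000)
```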
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.575 | 22.2222 | 500 | 0.4967 |
| 0.4945 | 44.4444 | 1000 | 0.4460 |
| 0.4681 | 66.6667 | 1500 | 0.4301 |
| 0.4514 | 88.8889 | 2000 | 0.4194 |
| 0.4396 | 111.1111 | 2500 | 0.4129 |
| 0.432 | 133.3333 | 3000 | 0.4124 |
| 0.43 | 155.5556 | 3500 | 0.4104 |
| 0.4317 | 177.7778 | 4000 | 0.4125 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b64_le5_s4000", "results": []}]} | mikhail-panzo/fil_b64_le5_s4000 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:02:58+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
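In the absence of author-provided code, here is a minimal generation sketch. It assumes the checkpoint loads with the standard `transformers` causal-LM API (the repository tags indicate a StableLM architecture); the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/model18"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```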
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model18 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:04:08+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
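In the absence of author-provided code, a minimal chat sketch. It assumes the tokenizer ships Gemma's chat template and that the checkpoint loads with the standard `transformers` causal-LM API; the question is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta1.1_kl0.01-epoch3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user", "content": "Summarise direct preference optimization in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```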
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta1.1_kl0.01-epoch3 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:04:24+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tulu2-13b-cost-UI-5e-7-nojudge
This model is a fine-tuned version of [allenai/tulu-2-13b](https://huggingface.co/allenai/tulu-2-13b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6912
- Rewards/chosen: -0.0076
- Rewards/rejected: -0.0119
- Rewards/accuracies: 0.5960
- Rewards/margins: 0.0043
- Rewards/margins Max: 0.0285
- Rewards/margins Min: -0.0168
- Rewards/margins Std: 0.0151
- Logps/rejected: -331.3923
- Logps/chosen: -334.6692
- Logits/rejected: -0.8885
- Logits/chosen: -1.0144
## Model description
More information needed
## Intended uses & limitations
More information needed
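Until usage is documented, a minimal loading sketch: this repository holds a PEFT adapter trained on top of [allenai/tulu-2-13b](https://huggingface.co/allenai/tulu-2-13b), so the adapter is attached to the base model with `peft`. The Tulu-style prompt below is an assumption, not a documented format.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "allenai/tulu-2-13b"
adapter_id = "just1nseo/tulu2-13b-cost-UI-5e-7-nojudge"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the DPO-trained adapter

prompt = "<|user|>\nWhat is direct preference optimization?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```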
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6731 | 1.0 | 1185 | 0.6912 | -0.0076 | -0.0119 | 0.5960 | 0.0043 | 0.0285 | -0.0168 | 0.0151 | -331.3923 | -334.6692 | -0.8885 | -1.0144 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-13b", "model-index": [{"name": "tulu2-13b-cost-UI-5e-7-nojudge", "results": []}]} | just1nseo/tulu2-13b-cost-UI-5e-7-nojudge | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:allenai/tulu-2-13b",
"region:us"
] | null | 2024-04-30T13:04:54+00:00 |
text-generation | transformers | # Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [EleutherAI/llemma_7b](https://huggingface.co/EleutherAI/llemma_7b)
* [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: codellama/CodeLlama-7b-hf
- model: EleutherAI/llemma_7b
merge_method: slerp
base_model: codellama/CodeLlama-7b-hf
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
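The merged checkpoint should load like any other Llama-family causal LM; a short sketch (the prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JyoP/merged_llemma_code_llama_slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("# Compute the n-th Fibonacci number\ndef fib(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```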
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["EleutherAI/llemma_7b", "codellama/CodeLlama-7b-hf"]} | JyoP/merged_llemma_code_llama_slerp | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:EleutherAI/llemma_7b",
"base_model:codellama/CodeLlama-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:04:55+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
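For a quick local test, one option is `llama-cpp-python`; the sketch below assumes you have downloaded one of the quant files listed in the table and that the GGUF embeds the Llama-3 chat template.

```python
from llama_cpp import Llama

# Point this at whichever quant you downloaded from the table below.
llm = Llama(model_path="Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```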
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2", "quantized_by": "mradermacher"} | mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF | null | [
"transformers",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:generator",
"base_model:yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:05:21+00:00 |
null | null | {} | weqweasdas/zephyr-7b-sft-full | null | [
"region:us"
] | null | 2024-04-30T13:05:53+00:00 |
|
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
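In the absence of author-provided code, a minimal sketch using the `text2text-generation` pipeline. The expected input format (plain article text versus a task prefix) is undocumented, so plain text is assumed here.

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Mariofm02/T5small_Sport_News")
print(generator("Liverpool beat Everton 2-0 in Saturday's derby thanks to two late goals.", max_new_tokens=60))
```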
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Sport_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:06:15+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0643
- Precision: 0.9384
- Recall: 0.9510
- F1: 0.9447
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0756 | 1.0 | 1756 | 0.0674 | 0.9094 | 0.9357 | 0.9224 | 0.9815 |
| 0.0367 | 2.0 | 3512 | 0.0666 | 0.9372 | 0.9487 | 0.9429 | 0.9855 |
| 0.0223 | 3.0 | 5268 | 0.0643 | 0.9384 | 0.9510 | 0.9447 | 0.9860 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
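## Example usage (sketch)

A minimal sketch with the token-classification pipeline; the exact label set depends on the (undocumented) training data, so the entity types in the output are not guaranteed to be CoNLL-style.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dcram/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entities
)
print(ner("Hugging Face is based in New York City."))
```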
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner", "results": []}]} | dcram/bert-finetuned-ner | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:06:41+00:00 |
text-generation | transformers | {} | itay-nakash/model_a9d3237cc1 | null | [
"transformers",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:06:58+00:00 |
|
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fil_b128_le4_s4000
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.4635 | 44.4444 | 500 | 0.4207 |
| 0.4317 | 88.8889 | 1000 | 0.4081 |
| 0.412 | 133.3333 | 1500 | 0.4051 |
| 0.395 | 177.7778 | 2000 | 0.4049 |
| 0.3848 | 222.2222 | 2500 | 0.4063 |
| 0.3738 | 266.6667 | 3000 | 0.4063 |
| 0.3618 | 311.1111 | 3500 | 0.4072 |
| 0.357 | 355.5556 | 4000 | 0.4081 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b128_le4_s4000", "results": []}]} | mikhail-panzo/fil_b128_le4_s4000 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:07:12+00:00 |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mariofm02/T5small_Tech_News | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:07:58+00:00 |
text2text-generation | transformers | {} | DinoDelija/nllb_english_german_fering_v2 | null | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:08:08+00:00 |
|
null | null | {} | MohametSena/ddpm-butterflies | null | [
"region:us"
] | null | 2024-04-30T13:08:20+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NDD-dimeshift_test-content
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5833
- Accuracy: 0.8875
- F1: 0.8913
- Precision: 0.8954
- Recall: 0.8875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1131 | 0.9989 | 669 | 0.5635 | 0.8758 | 0.8800 | 0.8845 | 0.8758 |
| 0.0553 | 1.9978 | 1338 | 0.5833 | 0.8875 | 0.8913 | 0.8954 | 0.8875 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
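## Example usage (sketch)

A minimal sketch with the text-classification pipeline; the label names and the expected input (page-content snippets, judging by the model name) are assumptions, since the training data is undocumented.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lgk03/NDD-dimeshift_test-content")
print(classifier("Example page content extracted from the application under test."))
```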
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-dimeshift_test-content", "results": []}]} | lgk03/NDD-dimeshift_test-content | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:08:22+00:00 |
null | nemo |
<h1 align="center"> nach0 </h1>
<h3 align="center"> Multimodal Natural and Chemical Languages Foundation Model </h3>
<p align="center">
📃 <a href="https://arxiv.org/abs/2311.12410" target="_blank">Paper</a> • ⏬ <a href="https://huggingface.co/insilicomedicine/nach0_base" target="_blank">Base nach0</a> • ⏬ <a href="https://huggingface.co/insilicomedicine/nach0_large" target="_blank">Large nach0</a> <br>
</p>
<div align=center><img src="images/nach0_Pub_2.png" width="70%" height="70%" /></div>
<h2 id="1">Overview</h2>
- nach0 is a multi-domain and multi-task encoder-decoder LLM pre-trained on unlabeled text from scientific literature, patents, and molecule strings to incorporate a range of chemical and linguistic knowledge.
- We employed instruction tuning, where specific task-related instructions are utilized to fine-tune nach0 for the final set of tasks. To train nach0 effectively, we leverage the NeMo framework, enabling efficient parallel optimization of both base and large model versions.
- Extensive experiments demonstrate that our model outperforms state-of-the-art baselines on single-domain and cross-domain tasks. Furthermore, it can generate high-quality outputs in molecular and textual formats, showcasing its effectiveness in multi-domain setups.
<h2 id="1">Tasks</h2>
Datasets used for training and evaluation. Colour represents the type of task. Yellow and blue datasets are single-domain, typically requiring regression/classification losses or generation in the target domain (natural language or SMILES strings). Gradients from yellow to blue represent cross-domain generation tasks that require natural language input and SMILES output, or vice versa.
<div align=center><img src="images/nach0_Pub_1.png" width="70%" height="70%" /></div>
<h2> Model Usage Guide</h2>
To use the model for inference, follow the steps below:
1. Preprocess the input by replacing the atom tokens with special tokens.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import re
from rdkit.Chem import MolFromSmiles
import string
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.*')
atoms_tokens = ['Ag','Al','As','Au','B','Ba','Bi','Br','C','Ca',
'Cd','Cl','Co','Cr','Cs','Cu','F','Fe','Ga','Gd',
'Ge','H','Hg','I','In','K','Li','M','Mg','Mn',
'Mo','N','Na','O','P','Pt','Ru','S','Sb','Sc',
'Se','Si','Sn','V','W','Z','Zn','c','e','n','o','p','s']
atoms_tokens = sorted(atoms_tokens, key=lambda s: len(s), reverse=True)
SMI_REGEX_PATTERN = r"(\[|\]|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9]|" + \
'|'.join(atoms_tokens) + ")"
regex = re.compile(SMI_REGEX_PATTERN)
def clean_output_sequence(output_sequence):
return output_sequence.replace('</s>', '').replace('<sm_', '').replace(' sm_', '').replace('>', '').strip()
def add_special_symbols(text):
output = []
for word in text.split():
tokens = [token for token in regex.findall(word)]
if len(tokens) > 4 and (word == ''.join(tokens)) and MolFromSmiles(word):
output.append(''.join(['<sm_'+t+'>' for t in tokens]))
else:
output.append(word)
return ' '.join(output)
PROMPT = """Given the following reactants and reagents, please provide a possible product.
CCN(CC)CC.CCN=C=NCCCN(C)C.CN(C)C=O.Cl.NC1=CC=C(Cl)C=C1N.O.O=C(O)CCCCCNC(=O)C=C1C2=CC=CC=C2C2=CC=CC=C12.OC1=CC=CC2=C1N=NN2.[Cl-].[Na+]"""
PROMPT = add_special_symbols(PROMPT)
```
2. Load the model checkpoint
```python
model = AutoModelForSeq2SeqLM.from_pretrained('insilicomedicine/nach0_base')
tokenizer = AutoTokenizer.from_pretrained('insilicomedicine/nach0_base')
```
3. Generate a response to the prompt and replace the special tokens with the corresponding atom tokens
```python
input_text_ids = tokenizer(PROMPT, padding="longest", max_length=512, truncation=True, return_tensors="pt")
generated_text_ids = model.generate(**input_text_ids, do_sample=True, top_k=100, top_p=0.95, max_length=512)
generated_text = tokenizer.batch_decode(generated_text_ids, skip_special_tokens=True)[0]
generated_text = clean_output_sequence(generated_text)
```
```python
# NC1=CC=C(Cl)C=C1NC(=O)CCCCCNC(=O)C=C1C2=CC=CC=C2C2=CC=CC=C12
```
<h3> References</h3>
If you use our repository, please cite the following related paper:
```
@article{nach0,
title={nach0: Multimodal Natural and Chemical Languages Foundation Model},
author={Micha Livne and Zulfat Miftahutdinov and Elena Tutubalina and Maksim Kuznetsov and Daniil Polykovskiy and Annika Brundyn and Aastha Jhunjhunwala and Anthony Costa and Alex Aliper and Alán Aspuru-Guzik and Alex Zhavoronkov},
year={2024},
journal={Chem. Sci.},
pages={-},
publisher={The Royal Society of Chemistry},
}
```
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["chemistry"]} | insilicomedicine/nach0_large | null | [
"nemo",
"chemistry",
"en",
"arxiv:2311.12410",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-30T13:10:49+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2094
- Accuracy: 0.9350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3895 | 1.0 | 370 | 0.2819 | 0.9432 |
| 0.225 | 2.0 | 740 | 0.2152 | 0.9472 |
| 0.1687 | 3.0 | 1110 | 0.1938 | 0.9499 |
| 0.1392 | 4.0 | 1480 | 0.1860 | 0.9526 |
| 0.1255 | 5.0 | 1850 | 0.1814 | 0.9553 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
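## Example usage (sketch)

A minimal sketch with the image-classification pipeline; the file path is a placeholder for any local image or URL.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="tedbelford/vit-base-oxford-iiit-pets")
print(classifier("path/to/pet_photo.jpg", top_k=3))  # returns breed labels with scores
```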
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224", "model-index": [{"name": "vit-base-oxford-iiit-pets", "results": []}]} | tedbelford/vit-base-oxford-iiit-pets | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:11:03+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tropianhs/mistral-tweet-finetune-tropianhs | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:11:38+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_ext_slavicbert
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2572
- Precision: 0.8607
- Recall: 0.8915
- F1: 0.8758
- Accuracy: 0.9627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3946 | 1.72 | 500 | 0.1925 | 0.7835 | 0.8471 | 0.8141 | 0.9467 |
| 0.1653 | 3.44 | 1000 | 0.1627 | 0.8340 | 0.8675 | 0.8504 | 0.9572 |
| 0.1183 | 5.15 | 1500 | 0.1700 | 0.8378 | 0.8808 | 0.8588 | 0.9595 |
| 0.0869 | 6.87 | 2000 | 0.1901 | 0.8554 | 0.8728 | 0.8640 | 0.9589 |
| 0.0661 | 8.59 | 2500 | 0.2037 | 0.8482 | 0.8867 | 0.8670 | 0.9595 |
| 0.053 | 10.31 | 3000 | 0.2011 | 0.8460 | 0.8867 | 0.8659 | 0.9609 |
| 0.043 | 12.03 | 3500 | 0.2216 | 0.8555 | 0.8888 | 0.8718 | 0.9593 |
| 0.0358 | 13.75 | 4000 | 0.2245 | 0.8492 | 0.8878 | 0.8680 | 0.9603 |
| 0.0296 | 15.46 | 4500 | 0.2401 | 0.8513 | 0.8872 | 0.8689 | 0.9603 |
| 0.0264 | 17.18 | 5000 | 0.2415 | 0.8564 | 0.8862 | 0.8710 | 0.9610 |
| 0.0212 | 18.9 | 5500 | 0.2570 | 0.8557 | 0.8872 | 0.8712 | 0.9622 |
| 0.0205 | 20.62 | 6000 | 0.2540 | 0.8567 | 0.8883 | 0.8722 | 0.9616 |
| 0.0167 | 22.34 | 6500 | 0.2573 | 0.8568 | 0.8894 | 0.8728 | 0.9614 |
| 0.0161 | 24.05 | 7000 | 0.2572 | 0.8607 | 0.8915 | 0.8758 | 0.9627 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "DeepPavlov/bert-base-bg-cs-pl-ru-cased", "model-index": [{"name": "CNEC_1_1_ext_slavicbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8606811145510835, "name": "Precision"}, {"type": "recall", "value": 0.8915018706574025, "name": "Recall"}, {"type": "f1", "value": 0.8758204253084799, "name": "F1"}, {"type": "accuracy", "value": 0.9626885008032336, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_1_1_ext_slavicbert | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:12:56+00:00 |
text-generation | transformers |
# TyphoonTime-passthrough
TyphoonTime-passthrough is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)
* [chargoddard/storytime-13b](https://huggingface.co/chargoddard/storytime-13b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: scb10x/typhoon-7b
layer_range: [0, 32]
- sources:
- model: chargoddard/storytime-13b
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "scb10x/typhoon-7b", "chargoddard/storytime-13b"]} | Manichik/TyphoonTime-passthrough | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"scb10x/typhoon-7b",
"chargoddard/storytime-13b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:13:33+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: 32 -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Osru/llama-2-7b-nubidoc
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
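To grab a single quant without cloning the whole repository, `huggingface_hub` can download one file directly; the Q4_K_M file from the table below is used here as an example.

```python
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/llama-2-7b-nubidoc-GGUF",
    filename="llama-2-7b-nubidoc.Q4_K_M.gguf",
)
print(gguf_path)  # pass this path to llama.cpp, llama-cpp-python, etc.
```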
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-nubidoc-GGUF/resolve/main/llama-2-7b-nubidoc.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "Osru/llama-2-7b-nubidoc", "quantized_by": "mradermacher"} | mradermacher/llama-2-7b-nubidoc-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Osru/llama-2-7b-nubidoc",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:13:44+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jackkira/commentgpt-ft | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:14:02+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
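As a rough guide, the values above correspond to a `transformers.TrainingArguments` configuration along the lines of the sketch below; the actual training script is not part of this card, and the output directory name is only a placeholder.
```python
# Illustrative only: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="shawgpt-ft",         # placeholder name
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective train batch size of 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=2,
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)
```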
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4134 | 0.9231 | 3 | 3.8260 |
| 3.9134 | 1.8462 | 6 | 3.3301 |
| 3.3628 | 2.7692 | 9 | 2.9029 |
| 2.2019 | 4.0 | 13 | 2.5051 |
| 2.6157 | 4.9231 | 16 | 2.2635 |
| 2.2945 | 5.8462 | 19 | 2.0651 |
| 2.0626 | 6.7692 | 22 | 1.9082 |
| 1.4488 | 8.0 | 26 | 1.8209 |
| 1.879 | 8.9231 | 29 | 1.7939 |
| 1.3095 | 9.2308 | 30 | 1.7856 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]} | jackkira/shawgpt-ft | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:14:04+00:00 |
null | diffusers | {} | motionsomething/magicfixup | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-04-30T13:14:09+00:00 |
|
null | null |
# int2eh/deepseek-coder-33b-instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`deepseek-ai/deepseek-coder-33b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo int2eh/deepseek-coder-33b-instruct-Q5_K_S-GGUF --model deepseek-coder-33b-instruct.Q5_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo int2eh/deepseek-coder-33b-instruct-Q5_K_S-GGUF --model deepseek-coder-33b-instruct.Q5_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m deepseek-coder-33b-instruct.Q5_K_S.gguf -n 128
```
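If you would rather stay in Python than use the CLI, the same file can be loaded with the `llama-cpp-python` bindings; this package is not mentioned in the instructions above, so treat the snippet as an illustrative sketch.
```python
# Illustrative sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# Assumes the GGUF file has already been downloaded to the current directory.
from llama_cpp import Llama

llm = Llama(model_path="deepseek-coder-33b-instruct.Q5_K_S.gguf", n_ctx=2048)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```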
| {"license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "license_name": "deepseek", "license_link": "LICENSE"} | int2eh/deepseek-coder-33b-instruct-Q5_K_S-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:other",
"region:us"
] | null | 2024-04-30T13:15:11+00:00 |
reinforcement-learning | stable-baselines3 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AhmedTarek -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AhmedTarek -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AhmedTarek
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.3),
('learning_starts', 100000),
('n_timesteps', 100000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
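If you want to load the checkpoint directly with Stable Baselines3 instead of going through the RL Zoo scripts, a sketch like the following should work; the filename is assumed from the usual RL Zoo naming convention, so check the repository files if it differs.
```python
# Sketch: load the trained agent directly with SB3 and huggingface_sb3.
# The checkpoint filename is an assumption based on RL Zoo conventions.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="AhmedTarek/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# Evaluation requires recreating the wrapped Atari env (AtariWrapper + 4-frame stack).
print(model.policy)
```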
| {"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "5.00 +/- 7.07", "name": "mean_reward", "verified": false}]}]}]} | AhmedTarek/dqn-SpaceInvadersNoFrameskip-v4 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-30T13:15:53+00:00 |
text-generation | transformers | {} | itay-nakash/model_bb62fa2388 | null | [
"transformers",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:17:09+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_harmlessharmless_gpt4_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
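The training script itself is not included; as an illustration, the hyperparameters above roughly translate into a TRL `DPOTrainer` setup like the sketch below, where the dataset, LoRA settings, and `beta` value are placeholders rather than the values actually used (and the trainer signature varies a little between TRL versions).
```python
# Illustrative sketch only: dataset, beta, and LoRA settings below are placeholders.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Toy preference data with the prompt/chosen/rejected columns DPO expects.
preference_dataset = Dataset.from_dict({
    "prompt": ["How should I greet a colleague?"],
    "chosen": [" A friendly hello works well."],
    "rejected": [" Ignore them."],
})

training_args = TrainingArguments(
    output_dir="dpo-gpt2",           # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size of 8
    warmup_steps=15,
    max_steps=5000,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,                  # with a PEFT config the frozen base model serves as reference
    args=training_args,
    beta=0.1,                        # placeholder: the actual beta is not reported
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),
)
# trainer.train()
```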
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "dpo_harmlessharmless_gpt4_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05", "results": []}]} | Holarissun/dpo_harmlessharmless_gpt4_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"region:us"
] | null | 2024-04-30T13:17:48+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_harmlessharmless_gpt4_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "dpo_harmlessharmless_gpt4_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06", "results": []}]} | Holarissun/dpo_harmlessharmless_gpt4_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"region:us"
] | null | 2024-04-30T13:19:42+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | MujtabaAhmed/lora_model | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:19:44+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | somnathsingh31/llava-1.5-7b-hf-ft-merged_model | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-04-30T13:19:51+00:00 |
object-detection | transformers | {} | qubvel-hf/detr-resnet-50-finetuned-10k-cppe5-no-trainer-v2 | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:20:18+00:00 |
|
text-generation | transformers |
# Llama-3-portuguese-Tom-cat-8b-instruct
<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat-8b.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
This model was trained on a superset of 300,000 chats in Portuguese.
It helps fill the gap for models in Portuguese. Tuned from Llama 3 8B, the model was adjusted mainly for chat.
# How to use
### FULL MODEL: A100
### HALF MODEL: L4
### 8-bit or 4-bit: T4 or V100
You can use the model in full precision or quantized down to 4-bit. Below we show both approaches.
Remember that verbs are important in your prompt. Tell the model how to act or behave so you can guide its response.
Details like these help models (even smaller models like 8B) perform much better.
```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model = AutoModelForCausalLM.from_pretrained("rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", device_map= {"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct")
model.eval()
```
You can use with Pipeline.
```python
from transformers import pipeline
pipe = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
do_sample=True,
max_new_tokens=512,
num_beams=2,
temperature=0.3,
top_k=50,
top_p=0.95,
early_stopping=True,
pad_token_id=tokenizer.eos_token_id,
)
def format_prompt(question:str):
system_prompt = "Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido."
return f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{ system_prompt }<|eot_id|><|start_header_id|>user<|end_header_id|>
{ question }<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""
prompt = format_prompt("Me fale sobra a OAB, Ordem dos Advogados do Brasil")
result = pipe(prompt)
result[0]["generated_text"].split("assistant<|end_header_id|>")[1]
#A Ordem dos Advogados do Brasil (OAB) é a entidade responsável por regulamentar e fiscalizar a profissão de advogado no Brasil.
#Foi criada em 1930, com o objetivo de proteger os direitos e interesses dos advogados e da sociedade, garantindo a defesa dos direitos e garantias fundamentais.
#A OAB é uma entidade de direito público, com personalidade jurídica própria, e é composta por advogados e advogadas que atuam em todo o território nacional.
#A entidade é dirigida por um Conselho Federal, que é o órgão máximo da OAB, e é composto por 32 membros, eleitos por votação direta dos advogados e advogadas.
#A OAB tem como principais atribuições:. Regulamentar a profissão de advogado: a OAB estabelece as normas e regulamentações para a formação, habilitação e exercício
#a profissão de advogado no Brasil. Fiscalizar a atividade dos advogados: a OAB fiscaliza a atividade dos advogados, verificando se eles atendem às normas e
#regulamentações estabelecidas.. Defender os direitos e interesses dos advogados: a OAB defende os direitos e interesses dos advogados, garantindo que eles
#possam exercer sua profissão com liberdade e segurança.\n4. Representar a sociedade: a OAB representa a sociedade, defendendo os direitos e interesses dos
#dadãos e das instituições.A OAB também é responsável por: Certificar a formação e habilitação dos advogados; Fiscalizar a atividade dos advogados, verificando
#se eles atendem às normas e regulamentações estabelecidas;\n* Defender os direitos e interesses dos advogados, garantindo que eles possam exercer sua profissão
#com liberdade e segurança; Representar a sociedade, defendendo os direitos e interesses dos cidadãos e das instituições.Além disso, a OAB também é responsável por:
#Publicar o Diário da Ordem dos Advogados do Brasil
```
```python
prompt = format_prompt("Haja como um professor de matemática e me explique porque 2 + 2 = 4")
result = pipe(prompt)
result[0]["generated_text"].split("assistant<|end_header_id|>")[1]
#Meu aluno, é um prazer explicar um conceito fundamental da matemática! equação 2 + 2 = 4 pode parecer simples, mas é
#importante entender por quê ela é verdadeira. Em matemática, a adição é uma operação que combina dois números para obter um resultado.
#Nesse caso, estamos adicionando 2 e 2. Para entender melhor, vamos começar com um exemplo concreto. Imagine que você tem 2 maçãs em uma mesa e seu amigo também tem
#2 maçãs em outra mesa. Se você e seu amigo combinam suas maçãs, você terá agora 4 maçãs.Agora, vamos usar símbolos para representar essa situação.
#Podemos escrever 2 + 2 =?, e o resultado é 4. Isso significa que quando você soma 2 e 2, o resultado é 4. Mas por quê isso é verdade? Isso é porque a adição é
#uma operação que segue certas regras. Uma dessas regras é que a ordem em que você soma os números não muda o resultado. Isso significa que 2 + 2 é igual a 2 + 2,
#independentemente da ordem em que você soma.Além disso, a adição também é uma operação que é associativa, o que significa que você pode adicionar números em
#qualquer ordem e o resultado será o mesmo. Por exemplo, (2 + 2) + 0 = 2 + (2 + 0). Isso é verdade porque a adição é uma operação que é idempotente, o que significa
#que adicionando 0 a um número não muda o resultado.Portanto, quando você soma 2 e 2, o resultado é 4 porque a adição segue essas regras. A equação 2 + 2 = 4
#é verdadeira porque é uma consequência lógica da definição da adição e das regras que a governam.Espero que isso tenha ajudado a entender melhor por quê 2 + 2 = 4,
#meu aluno!
```
If you run into memory problems such as "CUDA out of memory", you should use 4-bit or 8-bit quantization.
The full-precision model in Colab requires an A100.
With 4-bit or 8-bit quantization, a T4 or L4 is enough.
# 4bits example
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

base_model = "rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct"

# 4-bit NF4 quantization config
bnb_4bit_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_4bit_config,
    device_map={"": 0}
)
```
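# 8-bit example
The card only shows the 4-bit setup; an analogous 8-bit configuration, sketched here with the same `BitsAndBytesConfig` API rather than taken from the original examples, would be:
```python
# Illustrative 8-bit variant of the quantized loading shown above.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_8bit_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct",
    quantization_config=bnb_8bit_config,
    device_map={"": 0}
)
```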
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**70.57**|
|ENEM Challenge (No Images)| 70.40|
|BLUEX (No Images) | 58|
|OAB Exams | 51.07|
|Assin2 RTE | 90.91|
|Assin2 STS | 75.40|
|FaQuAD NLI | 76.05|
|HateBR Binary | 86.99|
|PT Hate Speech Binary | 60.39|
|tweetSentBR | 65.92|
### Comments
Any ideas, help, or reports are always welcome.
email: [email protected]
<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/heleno-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a> | {"language": ["pt"], "license": "apache-2.0", "library_name": "transformers", "tags": ["portugues", "portuguese", "QA", "instruct"], "datasets": ["rhaymison/superset"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "pipeline_tag": "text-generation", "model-index": [{"name": "Llama-3-portuguese-Tom-cat-8b-instruct", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "ENEM Challenge (No Images)", "type": "eduagarcia/enem_challenge", "split": "train", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc", "value": 70.4, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BLUEX (No Images)", "type": "eduagarcia-temp/BLUEX_without_images", "split": "train", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc", "value": 58.0, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "OAB Exams", "type": "eduagarcia/oab_exams", "split": "train", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc", "value": 51.07, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Assin2 RTE", "type": "assin2", "split": "test", "args": {"num_few_shot": 15}}, "metrics": [{"type": "f1_macro", "value": 90.91, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Assin2 STS", "type": "eduagarcia/portuguese_benchmark", "split": "test", "args": {"num_few_shot": 15}}, "metrics": [{"type": "pearson", "value": 75.4, "name": "pearson"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "FaQuAD NLI", "type": "ruanchaves/faquad-nli", "split": "test", "args": {"num_few_shot": 15}}, "metrics": [{"type": "f1_macro", "value": 76.05, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HateBR Binary", "type": "ruanchaves/hatebr", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "f1_macro", "value": 86.99, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "PT Hate Speech Binary", "type": 
"hate_speech_portuguese", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "f1_macro", "value": 60.39, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "tweetSentBR", "type": "eduagarcia/tweetsentbr_fewshot", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "f1_macro", "value": 65.92, "name": "f1-macro"}], "source": {"url": "https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct", "name": "Open Portuguese LLM Leaderboard"}}]}]} | rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"portugues",
"portuguese",
"QA",
"instruct",
"conversational",
"pt",
"dataset:rhaymison/superset",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:22:22+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta2.2_kl0.01-epoch2 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:23:11+00:00 |
null | null | {} | gatoch/april30-instruct-pix2pix | null | [
"region:us"
] | null | 2024-04-30T13:24:15+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5590
- Rewards/chosen: -0.7818
- Rewards/rejected: -2.7115
- Rewards/accuracies: 0.7857
- Rewards/margins: 1.9297
- Logps/rejected: -287.3273
- Logps/chosen: -289.7805
- Logits/rejected: -2.4561
- Logits/chosen: -2.5007
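For context, the reward columns reported here are TRL's implicit DPO rewards: beta-scaled differences between the policy and reference log-probabilities of the same completions. A small sketch of the relationship (beta is the DPO temperature used in training, which this card does not state):
```python
# Illustrative: how TRL's DPO reward metrics relate to the logged log-probabilities.
def implicit_reward(policy_logps: float, reference_logps: float, beta: float) -> float:
    return beta * (policy_logps - reference_logps)

# rewards/margins is implicit_reward for the chosen response minus the rejected one,
# e.g. the final evaluation above reports a margin of roughly 1.93.
```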
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6075 | 0.1 | 100 | 0.5945 | 0.3241 | -0.1206 | 0.7163 | 0.4447 | -261.4175 | -278.7209 | -2.6324 | -2.6651 |
| 0.5341 | 0.21 | 200 | 0.5471 | -0.0734 | -1.0103 | 0.7639 | 0.9369 | -270.3152 | -282.6963 | -2.5394 | -2.5779 |
| 0.5315 | 0.31 | 300 | 0.5258 | 0.1435 | -0.9757 | 0.7619 | 1.1192 | -269.9694 | -280.5274 | -2.5337 | -2.5711 |
| 0.4978 | 0.42 | 400 | 0.5366 | -0.2177 | -1.2826 | 0.7579 | 1.0649 | -273.0383 | -284.1391 | -2.5667 | -2.6011 |
| 0.5134 | 0.52 | 500 | 0.5340 | -0.4713 | -1.5140 | 0.7460 | 1.0427 | -275.3516 | -286.6748 | -2.4488 | -2.4836 |
| 0.5404 | 0.63 | 600 | 0.5188 | -0.0534 | -1.2981 | 0.7480 | 1.2447 | -273.1928 | -282.4962 | -2.3631 | -2.4039 |
| 0.5256 | 0.73 | 700 | 0.5270 | -0.2533 | -1.5704 | 0.7639 | 1.3172 | -275.9163 | -284.4948 | -2.3224 | -2.3640 |
| 0.4991 | 0.84 | 800 | 0.5278 | -0.2394 | -1.5276 | 0.7639 | 1.2882 | -275.4879 | -284.3556 | -2.3730 | -2.4144 |
| 0.5084 | 0.94 | 900 | 0.5457 | 0.2664 | -0.9546 | 0.7619 | 1.2210 | -269.7581 | -279.2981 | -2.4875 | -2.5254 |
| 0.1011 | 1.05 | 1000 | 0.5361 | -0.5236 | -2.1364 | 0.7877 | 1.6129 | -281.5762 | -287.1976 | -2.4389 | -2.4774 |
| 0.0942 | 1.15 | 1100 | 0.5454 | -0.4356 | -2.2047 | 0.7897 | 1.7691 | -282.2592 | -286.3182 | -2.4515 | -2.4926 |
| 0.0817 | 1.26 | 1200 | 0.5530 | -0.7588 | -2.5855 | 0.7857 | 1.8268 | -286.0674 | -289.5495 | -2.4441 | -2.4863 |
| 0.0697 | 1.36 | 1300 | 0.5549 | -0.5919 | -2.4690 | 0.7798 | 1.8771 | -284.9021 | -287.8810 | -2.4474 | -2.4910 |
| 0.0842 | 1.47 | 1400 | 0.5575 | -0.7425 | -2.6443 | 0.7917 | 1.9018 | -286.6550 | -289.3871 | -2.4669 | -2.5100 |
| 0.075 | 1.57 | 1500 | 0.5590 | -0.5382 | -2.4532 | 0.7956 | 1.9150 | -284.7438 | -287.3436 | -2.4699 | -2.5133 |
| 0.098 | 1.67 | 1600 | 0.5583 | -0.7761 | -2.6741 | 0.7877 | 1.8980 | -286.9528 | -289.7227 | -2.4652 | -2.5092 |
| 0.0718 | 1.78 | 1700 | 0.5593 | -0.7532 | -2.6704 | 0.7877 | 1.9172 | -286.9160 | -289.4940 | -2.4592 | -2.5036 |
| 0.0828 | 1.88 | 1800 | 0.5606 | -0.7985 | -2.7306 | 0.7897 | 1.9321 | -287.5178 | -289.9467 | -2.4560 | -2.5007 |
| 0.103 | 1.99 | 1900 | 0.5601 | -0.7805 | -2.7113 | 0.7857 | 1.9309 | -287.3255 | -289.7666 | -2.4554 | -2.5002 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "zephyr-7b-dpo-full", "results": []}]} | weqweasdas/zephyr-7b-dpo-full | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:24:16+00:00 |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.40.1
- Pytorch 1.13.1+cu116
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["swag"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-swag", "results": []}]} | jarminraws/bert-base-uncased-finetuned-swag | null | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"base_model:bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:24:59+00:00 |
text2text-generation | transformers | {} | alexbeta80/pix2struct_polizze_2 | null | [
"transformers",
"pytorch",
"pix2struct",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:25:17+00:00 |
|
null | null | {} | dana2002/last-one | null | [
"region:us"
] | null | 2024-04-30T13:25:18+00:00 |
|
feature-extraction | transformers | # fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564
## Model Description
fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)  # the base Jina model ships custom modeling code on the Hub
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
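Note that `model(**tokens)` returns per-token hidden states rather than a single vector; a generic mean-pooling step such as the one below (a sketch, not code from the model authors) turns them into one sentence embedding.
```python
# Generic mean pooling over the token states produced by the snippet above.
import torch

with torch.no_grad():
    outputs = model(**tokens)

mask = tokens["attention_mask"].unsqueeze(-1).float()           # (batch, seq_len, 1)
summed = (outputs.last_hidden_state * mask).sum(dim=1)          # ignore padding tokens
sentence_embedding = summed / mask.sum(dim=1).clamp(min=1e-9)   # (batch, hidden_size)
print(sentence_embedding.shape)
```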
| {} | fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:25:39+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA17
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2942 | 0.09 | 10 | 0.1808 |
| 0.1615 | 0.18 | 20 | 0.1583 |
| 0.1524 | 0.27 | 30 | 0.1564 |
| 0.1564 | 0.36 | 40 | 0.1529 |
| 0.1528 | 0.45 | 50 | 0.1525 |
| 0.1533 | 0.54 | 60 | 0.1504 |
| 0.1528 | 0.63 | 70 | 0.1483 |
| 0.147 | 0.73 | 80 | 0.1365 |
| 0.4162 | 0.82 | 90 | 0.2004 |
| 0.3136 | 0.91 | 100 | 0.0837 |
| 0.15 | 1.0 | 110 | 0.0849 |
| 0.0947 | 1.09 | 120 | 0.0721 |
| 0.1072 | 1.18 | 130 | 0.3448 |
| 0.0929 | 1.27 | 140 | 0.0710 |
| 0.7574 | 1.36 | 150 | 0.4213 |
| 0.1423 | 1.45 | 160 | 0.0615 |
| 0.0548 | 1.54 | 170 | 0.0528 |
| 0.0641 | 1.63 | 180 | 0.0572 |
| 0.0594 | 1.72 | 190 | 0.0471 |
| 0.0438 | 1.81 | 200 | 0.0419 |
| 0.0362 | 1.9 | 210 | 0.0342 |
| 0.0272 | 1.99 | 220 | 0.0235 |
| 0.0372 | 2.08 | 230 | 0.0306 |
| 0.0254 | 2.18 | 240 | 0.0238 |
| 0.0194 | 2.27 | 250 | 0.0227 |
| 0.0253 | 2.36 | 260 | 0.0218 |
| 0.0255 | 2.45 | 270 | 0.0208 |
| 0.0171 | 2.54 | 280 | 0.0208 |
| 0.0246 | 2.63 | 290 | 0.0204 |
| 0.0215 | 2.72 | 300 | 0.0197 |
| 0.019 | 2.81 | 310 | 0.0195 |
| 0.0205 | 2.9 | 320 | 0.0188 |
| 0.021 | 2.99 | 330 | 0.0188 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA17", "results": []}]} | Litzy619/O0430HMA17 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:25:46+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA18
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0126
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3037 | 0.09 | 10 | 0.1850 |
| 0.1607 | 0.18 | 20 | 0.1587 |
| 0.151 | 0.27 | 30 | 0.1564 |
| 0.1546 | 0.36 | 40 | 0.1521 |
| 0.1523 | 0.45 | 50 | 0.1509 |
| 0.1561 | 0.54 | 60 | 0.1489 |
| 0.1516 | 0.63 | 70 | 0.1490 |
| 0.1506 | 0.73 | 80 | 0.1549 |
| 0.1465 | 0.82 | 90 | 0.1491 |
| 0.1473 | 0.91 | 100 | 0.1499 |
| 0.1483 | 1.0 | 110 | 0.1459 |
| 0.1178 | 1.09 | 120 | 0.0927 |
| 0.3145 | 1.18 | 130 | 0.1129 |
| 0.361 | 1.27 | 140 | 0.0686 |
| 0.0834 | 1.36 | 150 | 0.0706 |
| 0.0597 | 1.45 | 160 | 0.0545 |
| 0.0553 | 1.54 | 170 | 0.0613 |
| 0.0607 | 1.63 | 180 | 0.0521 |
| 0.0629 | 1.72 | 190 | 0.0501 |
| 0.0458 | 1.81 | 200 | 0.0351 |
| 0.0544 | 1.9 | 210 | 0.0925 |
| 0.0574 | 1.99 | 220 | 0.0583 |
| 0.0487 | 2.08 | 230 | 0.0434 |
| 0.0349 | 2.18 | 240 | 0.0310 |
| 0.0245 | 2.27 | 250 | 0.0252 |
| 0.0236 | 2.36 | 260 | 0.0197 |
| 0.0221 | 2.45 | 270 | 0.0182 |
| 0.0145 | 2.54 | 280 | 0.0161 |
| 0.0212 | 2.63 | 290 | 0.0146 |
| 0.0151 | 2.72 | 300 | 0.0142 |
| 0.013 | 2.81 | 310 | 0.0131 |
| 0.0182 | 2.9 | 320 | 0.0129 |
| 0.014 | 2.99 | 330 | 0.0126 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA18", "results": []}]} | Litzy619/O0430HMA18 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:25:59+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA19
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.313 | 0.09 | 10 | 0.1802 |
| 0.1606 | 0.18 | 20 | 0.1569 |
| 0.1542 | 0.27 | 30 | 0.1539 |
| 0.1561 | 0.36 | 40 | 0.1548 |
| 0.1506 | 0.45 | 50 | 0.1503 |
| 0.1507 | 0.54 | 60 | 0.1485 |
| 0.1516 | 0.63 | 70 | 0.1478 |
| 0.1497 | 0.73 | 80 | 0.1605 |
| 0.1476 | 0.82 | 90 | 0.1501 |
| 0.1474 | 0.91 | 100 | 0.1492 |
| 0.458 | 1.0 | 110 | 0.1739 |
| 0.1648 | 1.09 | 120 | 0.1543 |
| 0.5694 | 1.18 | 130 | 0.1570 |
| 0.1614 | 1.27 | 140 | 0.1608 |
| 1.4211 | 1.36 | 150 | 0.1518 |
| 0.1489 | 1.45 | 160 | 0.1496 |
| 0.151 | 1.54 | 170 | 0.1514 |
| 0.4185 | 1.63 | 180 | 0.6224 |
| 0.6333 | 1.72 | 190 | 0.1473 |
| 0.1485 | 1.81 | 200 | 0.1536 |
| 0.1557 | 1.9 | 210 | 0.1487 |
| 0.1499 | 1.99 | 220 | 0.1507 |
| 0.1509 | 2.08 | 230 | 0.1486 |
| 0.1448 | 2.18 | 240 | 0.1475 |
| 0.145 | 2.27 | 250 | 0.1497 |
| 0.1464 | 2.36 | 260 | 0.1487 |
| 0.1452 | 2.45 | 270 | 0.1472 |
| 0.1436 | 2.54 | 280 | 0.1468 |
| 0.1441 | 2.63 | 290 | 0.1477 |
| 0.1463 | 2.72 | 300 | 0.1466 |
| 0.1455 | 2.81 | 310 | 0.1464 |
| 0.146 | 2.9 | 320 | 0.1465 |
| 0.1467 | 2.99 | 330 | 0.1466 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA19", "results": []}]} | Litzy619/O0430HMA19 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:26:20+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA20
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3072 | 0.09 | 10 | 0.1919 |
| 0.1572 | 0.18 | 20 | 0.1540 |
| 0.1494 | 0.27 | 30 | 0.1654 |
| 0.1558 | 0.36 | 40 | 0.1537 |
| 0.1517 | 0.45 | 50 | 0.1553 |
| 0.1513 | 0.54 | 60 | 0.1508 |
| 0.1532 | 0.63 | 70 | 0.1477 |
| 0.1499 | 0.73 | 80 | 0.1557 |
| 0.1469 | 0.82 | 90 | 0.1484 |
| 0.1471 | 0.91 | 100 | 0.1501 |
| 0.1499 | 1.0 | 110 | 0.1513 |
| 0.1462 | 1.09 | 120 | 0.1486 |
| 0.1473 | 1.18 | 130 | 0.1539 |
| 0.1475 | 1.27 | 140 | 0.1490 |
| 0.1485 | 1.36 | 150 | 0.1485 |
| 0.1371 | 1.45 | 160 | 0.1344 |
| 0.6524 | 1.54 | 170 | 0.4249 |
| 0.1586 | 1.63 | 180 | 0.0785 |
| 0.0783 | 1.72 | 190 | 0.0804 |
| 0.0752 | 1.81 | 200 | 0.0712 |
| 0.0658 | 1.9 | 210 | 1.1126 |
| 0.1685 | 1.99 | 220 | 0.0589 |
| 0.0605 | 2.08 | 230 | 0.0574 |
| 0.0497 | 2.18 | 240 | 0.0514 |
| 0.0475 | 2.27 | 250 | 0.0463 |
| 0.0494 | 2.36 | 260 | 0.0429 |
| 0.0369 | 2.45 | 270 | 0.0338 |
| 0.0261 | 2.54 | 280 | 0.0276 |
| 0.0349 | 2.63 | 290 | 0.0251 |
| 0.0278 | 2.72 | 300 | 0.0250 |
| 0.0248 | 2.81 | 310 | 0.0220 |
| 0.0269 | 2.9 | 320 | 0.0220 |
| 0.0246 | 2.99 | 330 | 0.0215 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA20", "results": []}]} | Litzy619/O0430HMA20 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:26:22+00:00 |
null | transformers |
# Model details
These are q4 GGUF quantizations of a quick experiment on a llamafied Phi-3: only 1,000 ORPO steps on an AzureML-translated German Orca binarized dataset (johannhartmann/mistralorpo), using the original Phi-3 prompt template. The immediate result is not really good, but also not bad enough to discourage further experiments.
# Benchmark results
This was an experiment with a German dataset snippet which, as expected, worsened results on English benchmarks:
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.40|
|AI2 Reasoning Challenge (25-Shot)|60.41|
|HellaSwag (10-Shot) |78.37|
|MMLU (5-Shot) |65.26|
|TruthfulQA (0-shot) |49.76|
|Winogrande (5-shot) |70.24|
|GSM8k (5-shot) |62.32|
On the German EQ-Bench (v2_de) it scores 51.82 (an insignificant gain over 51.41 for the original llamafied model, but significantly better than the intermediate cstr/phi-3-orpo-v8_16, which achieved 46.38 after an initial 150 test steps), though still with only 164/171 responses correctly parsed.
Note: Parsing correctness can be improved, among other things, by just a few SFT steps, as shown with cas/phi3-mini-4k-llamafied-sft-v3 (170/171 correct, but then with a score of only 39.46 on v2_de; that run was also an experiment in changing the prompt template).
All of this was done quickly with bnb and q4 quants only, which might, in theory, significantly affect small dense models like this one.
Still, it served its purpose as a proof of concept for both experiments. It would probably be easy to improve the results further, but that would take some time and compute.
# Training setup
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
| {"language": ["en", "de"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "cstr/phi-3-orpo-v8_16"} | cstr/phi-3-orpo-v9_16-GGUF | null | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"orpo",
"en",
"de",
"base_model:cstr/phi-3-orpo-v8_16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:26:29+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta2.2_kl0.01-epoch3 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:26:33+00:00 |
null | null | {} | fabst/openai-whisper-tiny-swiss-german-1714483336 | null | [
"region:us"
] | null | 2024-04-30T13:26:47+00:00 |
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA21
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4312 | 0.09 | 10 | 0.1874 |
| 0.1634 | 0.18 | 20 | 0.1497 |
| 0.1488 | 0.27 | 30 | 0.1613 |
| 0.1545 | 0.36 | 40 | 0.1540 |
| 0.1508 | 0.45 | 50 | 0.1523 |
| 0.1531 | 0.54 | 60 | 0.1514 |
| 0.1533 | 0.63 | 70 | 0.1467 |
| 0.1508 | 0.73 | 80 | 0.1608 |
| 0.1482 | 0.82 | 90 | 0.1485 |
| 0.1462 | 0.91 | 100 | 0.1425 |
| 0.1171 | 1.0 | 110 | 0.9807 |
| 0.9406 | 1.09 | 120 | 0.1657 |
| 0.3022 | 1.18 | 130 | 0.2118 |
| 0.173 | 1.27 | 140 | 0.2822 |
| 0.1207 | 1.36 | 150 | 0.0716 |
| 0.067 | 1.45 | 160 | 0.0495 |
| 0.0569 | 1.54 | 170 | 0.0470 |
| 0.0515 | 1.63 | 180 | 0.0446 |
| 0.0397 | 1.72 | 190 | 0.0745 |
| 0.0345 | 1.81 | 200 | 0.0217 |
| 0.0199 | 1.9 | 210 | 0.0118 |
| 0.0097 | 1.99 | 220 | 0.0128 |
| 0.025 | 2.08 | 230 | 0.0168 |
| 0.0139 | 2.18 | 240 | 0.0121 |
| 0.0108 | 2.27 | 250 | 0.0133 |
| 0.0148 | 2.36 | 260 | 0.0100 |
| 0.0105 | 2.45 | 270 | 0.0065 |
| 0.0058 | 2.54 | 280 | 0.0065 |
| 0.0147 | 2.63 | 290 | 0.0061 |
| 0.0068 | 2.72 | 300 | 0.0061 |
| 0.0079 | 2.81 | 310 | 0.0058 |
| 0.0105 | 2.9 | 320 | 0.0059 |
| 0.0067 | 2.99 | 330 | 0.0059 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA21", "results": []}]} | Litzy619/O0430HMA21 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:27:23+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA22
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4451 | 0.09 | 10 | 0.1875 |
| 0.166 | 0.18 | 20 | 0.1559 |
| 0.1487 | 0.27 | 30 | 0.1614 |
| 0.1558 | 0.36 | 40 | 0.1541 |
| 0.1509 | 0.45 | 50 | 0.1503 |
| 0.154 | 0.54 | 60 | 0.1506 |
| 0.1515 | 0.63 | 70 | 0.1472 |
| 0.1486 | 0.73 | 80 | 0.1571 |
| 0.1465 | 0.82 | 90 | 0.1489 |
| 0.1486 | 0.91 | 100 | 0.1494 |
| 0.1512 | 1.0 | 110 | 0.1504 |
| 0.1451 | 1.09 | 120 | 0.1458 |
| 0.1363 | 1.18 | 130 | 0.1194 |
| 0.4695 | 1.27 | 140 | 0.0859 |
| 0.2213 | 1.36 | 150 | 0.1021 |
| 0.1433 | 1.45 | 160 | 0.1743 |
| 0.0896 | 1.54 | 170 | 0.0789 |
| 0.0705 | 1.63 | 180 | 0.0677 |
| 0.0746 | 1.72 | 190 | 0.0697 |
| 0.0572 | 1.81 | 200 | 0.0534 |
| 0.0524 | 1.9 | 210 | 0.0385 |
| 0.0511 | 1.99 | 220 | 0.0436 |
| 0.0401 | 2.08 | 230 | 0.0288 |
| 0.0262 | 2.18 | 240 | 0.0192 |
| 0.0223 | 2.27 | 250 | 0.0179 |
| 0.0254 | 2.36 | 260 | 0.0184 |
| 0.0184 | 2.45 | 270 | 0.0169 |
| 0.0124 | 2.54 | 280 | 0.0137 |
| 0.0199 | 2.63 | 290 | 0.0124 |
| 0.0158 | 2.72 | 300 | 0.0128 |
| 0.0124 | 2.81 | 310 | 0.0115 |
| 0.0159 | 2.9 | 320 | 0.0125 |
| 0.0144 | 2.99 | 330 | 0.0116 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA22", "results": []}]} | Litzy619/O0430HMA22 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:27:30+00:00 |
automatic-speech-recognition | transformers | {} | adityarra07/whisper-med-LoRA_noise_128_256_45k_merged_ckpt2 | null | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:27:44+00:00 |
|
null | null | {} | buzoff666/wc456 | null | [
"region:us"
] | null | 2024-04-30T13:29:21+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_harmlessharmless_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of an equivalent TRL `DPOTrainer` setup follows the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
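Because this repository carries PEFT adapter weights trained with TRL's DPO implementation, here is a minimal, hypothetical sketch of a comparable setup. The tiny in-memory preference dataset, LoRA settings, and output directory are illustrative assumptions, not the original configuration, and the API shown assumes a TRL version contemporary with the framework versions listed below.

```python
# Hypothetical sketch of a DPO + LoRA run matching the hyperparameters above.
# Dataset, LoRA settings and output_dir are placeholders, not the original setup.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Placeholder preference data in the prompt/chosen/rejected format DPO expects.
train_dataset = Dataset.from_dict({
    "prompt": ["How can I stay safe online?"],
    "chosen": [" Use strong, unique passwords and enable two-factor authentication."],
    "rejected": [" Share your passwords with friends so you never lose them."],
})

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

args = TrainingArguments(
    output_dir="dpo_gpt2_harmless",   # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,    # total train batch size 8
    max_steps=5000,
    warmup_steps=15,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```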
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "dpo_harmlessharmless_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06", "results": []}]} | Holarissun/dpo_harmlessharmless_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"region:us"
] | null | 2024-04-30T13:29:38+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_harmlessharmless_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "dpo_harmlessharmless_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05", "results": []}]} | Holarissun/dpo_harmlessharmless_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"region:us"
] | null | 2024-04-30T13:29:49+00:00 |
null | null | {"license": "mit"} | ramanan-techlover/smart-yoga | null | [
"license:mit",
"region:us"
] | null | 2024-04-30T13:30:06+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt1B_domar_finetune_1epoch
This model is a fine-tuned version of [AI-Sweden-Models/gpt-sw3-1.3b](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9146 | 0.79 | 200 | 0.8557 |
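Since this repository contains PEFT adapter weights rather than a full model, inference requires loading the adapter on top of the base checkpoint. A minimal, hypothetical sketch (the prompt is a placeholder):

```python
# Hypothetical usage sketch: base model + this PEFT adapter for generation.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("AI-Sweden-Models/gpt-sw3-1.3b")
tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-1.3b")
model = PeftModel.from_pretrained(base, "thorirhrafn/gpt1B_domar_finetune_1epoch")

inputs = tokenizer("Sammanfatta domen:", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```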
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "AI-Sweden-Models/gpt-sw3-1.3b", "model-index": [{"name": "gpt1B_domar_finetune_1epoch", "results": []}]} | thorirhrafn/gpt1B_domar_finetune_1epoch | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:AI-Sweden-Models/gpt-sw3-1.3b",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T13:31:34+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** mohammedriza-rahman
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
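A minimal, hypothetical sketch of loading this fine-tune for inference with Unsloth follows; the sequence length and generation settings are illustrative, not values stated in this card.

```python
# Hypothetical inference sketch with Unsloth; settings are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mohammedriza-rahman/updated",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast generation path

inputs = tokenizer(["The capital of France is"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```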
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | mohammedriza-rahman/updated | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:32:22+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_slavicbert
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3354
- Precision: 0.8427
- Recall: 0.8737
- F1: 0.8579
- Accuracy: 0.9553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.288 | 2.22 | 1000 | 0.2461 | 0.7705 | 0.7926 | 0.7814 | 0.9413 |
| 0.1551 | 4.44 | 2000 | 0.2270 | 0.8116 | 0.8444 | 0.8277 | 0.9503 |
| 0.0963 | 6.67 | 3000 | 0.2220 | 0.8181 | 0.8623 | 0.8396 | 0.9533 |
| 0.0619 | 8.89 | 4000 | 0.2520 | 0.8202 | 0.8598 | 0.8395 | 0.9507 |
| 0.044 | 11.11 | 5000 | 0.2613 | 0.8332 | 0.8680 | 0.8502 | 0.9535 |
| 0.0283 | 13.33 | 6000 | 0.2734 | 0.8377 | 0.8673 | 0.8522 | 0.9546 |
| 0.0227 | 15.56 | 7000 | 0.2908 | 0.8390 | 0.8687 | 0.8536 | 0.9546 |
| 0.0173 | 17.78 | 8000 | 0.3083 | 0.8393 | 0.8670 | 0.8529 | 0.9528 |
| 0.013 | 20.0 | 9000 | 0.3238 | 0.8333 | 0.8673 | 0.8500 | 0.9522 |
| 0.0103 | 22.22 | 10000 | 0.3352 | 0.8325 | 0.8712 | 0.8515 | 0.9539 |
| 0.0091 | 24.44 | 11000 | 0.3299 | 0.8400 | 0.8655 | 0.8526 | 0.9542 |
| 0.0073 | 26.67 | 12000 | 0.3376 | 0.8387 | 0.8666 | 0.8524 | 0.9535 |
| 0.0065 | 28.89 | 13000 | 0.3354 | 0.8427 | 0.8737 | 0.8579 | 0.9553 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
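A minimal, hypothetical usage sketch with the 🤗 `pipeline` API (the example sentence is illustrative):

```python
# Hypothetical sketch: Czech NER with this model via the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="stulcrad/CNEC_2_0_slavicbert",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Václav Havel se narodil v Praze."))
```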
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "DeepPavlov/bert-base-bg-cs-pl-ru-cased", "model-index": [{"name": "CNEC_2_0_slavicbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8427043808209728, "name": "Precision"}, {"type": "recall", "value": 0.8737482117310443, "name": "Recall"}, {"type": "f1", "value": 0.8579455662862159, "name": "F1"}, {"type": "accuracy", "value": 0.9552753162160115, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_2_0_slavicbert | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:32:39+00:00 |
text-classification | transformers | {} | skelley/Day_to_day_tasks | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:32:59+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_slavicbert
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3720
- Precision: 0.8513
- Recall: 0.8671
- F1: 0.8591
- Accuracy: 0.9509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3658 | 0.85 | 1000 | 0.2671 | 0.8101 | 0.8172 | 0.8136 | 0.9366 |
| 0.227 | 1.7 | 2000 | 0.2624 | 0.8190 | 0.8172 | 0.8181 | 0.9380 |
| 0.141 | 2.56 | 3000 | 0.2474 | 0.8317 | 0.8424 | 0.8370 | 0.9448 |
| 0.092 | 3.41 | 4000 | 0.2498 | 0.8412 | 0.8534 | 0.8472 | 0.9460 |
| 0.0839 | 4.26 | 5000 | 0.2689 | 0.8438 | 0.8583 | 0.8510 | 0.9489 |
| 0.0698 | 5.11 | 6000 | 0.2830 | 0.8420 | 0.8539 | 0.8479 | 0.9473 |
| 0.0507 | 5.96 | 7000 | 0.2902 | 0.8359 | 0.8503 | 0.8431 | 0.9468 |
| 0.0344 | 6.81 | 8000 | 0.3221 | 0.8310 | 0.8512 | 0.8410 | 0.9478 |
| 0.0249 | 7.67 | 9000 | 0.3262 | 0.8444 | 0.8508 | 0.8476 | 0.9478 |
| 0.0185 | 8.52 | 10000 | 0.3214 | 0.8458 | 0.8525 | 0.8492 | 0.9502 |
| 0.0151 | 9.37 | 11000 | 0.3399 | 0.8382 | 0.8578 | 0.8479 | 0.9499 |
| 0.01 | 10.22 | 12000 | 0.3348 | 0.8385 | 0.8574 | 0.8478 | 0.9492 |
| 0.0086 | 11.07 | 13000 | 0.3636 | 0.8395 | 0.8543 | 0.8468 | 0.9479 |
| 0.0092 | 11.93 | 14000 | 0.3644 | 0.8419 | 0.8578 | 0.8498 | 0.9485 |
| 0.0058 | 12.78 | 15000 | 0.3624 | 0.8450 | 0.8618 | 0.8533 | 0.9503 |
| 0.0032 | 13.63 | 16000 | 0.3703 | 0.8483 | 0.8614 | 0.8548 | 0.9507 |
| 0.003 | 14.48 | 17000 | 0.3720 | 0.8513 | 0.8671 | 0.8591 | 0.9509 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "DeepPavlov/bert-base-bg-cs-pl-ru-cased", "model-index": [{"name": "CNEC_1_1_slavicbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8513220632856524, "name": "Precision"}, {"type": "recall", "value": 0.8671081677704194, "name": "Recall"}, {"type": "f1", "value": 0.8591426071741033, "name": "F1"}, {"type": "accuracy", "value": 0.9509352959214965, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_1_1_slavicbert | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:33:10+00:00 |
text-generation | transformers | {} | ajtamayoh/GPT2_DocBot_SonatafyAI_V4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T13:33:12+00:00 |
|
text-to-image | diffusers |
# AutoTrain LoRA DreamBooth - AmilaUvaz/Michelle
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on <Michelle, a 24-year-old traveler with brown skin, rectangle face, brown eyes, Armond-shaped eyebrows, long wavy brown hair)> using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
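A minimal, hypothetical sketch of applying these LoRA weights with `diffusers`; the step count and the text added after the instance prompt are illustrative assumptions:

```python
# Hypothetical sketch: load the base model, attach this LoRA, and generate.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("AmilaUvaz/Michelle")

prompt = ("<Michelle, a 24-year-old traveler with brown skin, rectangle face, brown eyes, "
          "Armond-shaped eyebrows, long wavy brown hair)> walking through a sunlit market")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("michelle.png")
```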
| {"license": "openrail++", "tags": ["autotrain", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "<Michelle, a 24-year-old traveler with brown skin, rectangle face, brown eyes, Armond-shaped eyebrows, long wavy brown hair)>"} | AmilaUvaz/Michelle | null | [
"diffusers",
"autotrain",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:openrail++",
"region:us"
] | null | 2024-04-30T13:33:17+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/shyamieee/Maverick-v2.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
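As a concrete, hypothetical starting point with `llama-cpp-python` (any of the files listed below can be substituted for the Q4_K_M one):

```python
# Hypothetical sketch: download one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Maverick-v2.0-GGUF",
    filename="Maverick-v2.0.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about merged language models.", max_tokens=64)
print(out["choices"][0]["text"])
```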
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "shyamieee/Maverick-v2.0", "quantized_by": "mradermacher"} | mradermacher/Maverick-v2.0-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:shyamieee/Maverick-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:33:24+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_Supertypes_slavicbert
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2859
- Precision: 0.8603
- Recall: 0.8905
- F1: 0.8752
- Accuracy: 0.9654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1989 | 1.0 | 1799 | 0.1639 | 0.8057 | 0.8410 | 0.8230 | 0.9544 |
| 0.1512 | 2.0 | 3598 | 0.1679 | 0.8105 | 0.8550 | 0.8322 | 0.9550 |
| 0.1085 | 3.0 | 5397 | 0.1516 | 0.8253 | 0.8662 | 0.8452 | 0.9582 |
| 0.0823 | 4.0 | 7196 | 0.1586 | 0.8374 | 0.8765 | 0.8565 | 0.9608 |
| 0.0529 | 5.0 | 8995 | 0.1802 | 0.8346 | 0.8670 | 0.8505 | 0.9602 |
| 0.0507 | 6.0 | 10794 | 0.2033 | 0.8249 | 0.8699 | 0.8468 | 0.9603 |
| 0.0441 | 7.0 | 12593 | 0.2032 | 0.8401 | 0.8724 | 0.8559 | 0.9614 |
| 0.0271 | 8.0 | 14392 | 0.2247 | 0.8450 | 0.8740 | 0.8593 | 0.9604 |
| 0.0289 | 9.0 | 16191 | 0.2319 | 0.8385 | 0.8794 | 0.8585 | 0.9613 |
| 0.0214 | 10.0 | 17990 | 0.2623 | 0.8462 | 0.8703 | 0.8581 | 0.9609 |
| 0.0173 | 11.0 | 19789 | 0.2553 | 0.8432 | 0.8748 | 0.8587 | 0.9614 |
| 0.0149 | 12.0 | 21588 | 0.2760 | 0.8582 | 0.8827 | 0.8703 | 0.9631 |
| 0.0143 | 13.0 | 23387 | 0.2748 | 0.8530 | 0.8843 | 0.8684 | 0.9630 |
| 0.0095 | 14.0 | 25186 | 0.2796 | 0.8543 | 0.8864 | 0.8701 | 0.9632 |
| 0.0049 | 15.0 | 26985 | 0.2944 | 0.8512 | 0.8810 | 0.8658 | 0.9627 |
| 0.0047 | 16.0 | 28784 | 0.2836 | 0.8524 | 0.8848 | 0.8683 | 0.9644 |
| 0.0047 | 17.0 | 30583 | 0.2902 | 0.8490 | 0.8827 | 0.8655 | 0.9646 |
| 0.0039 | 18.0 | 32382 | 0.2888 | 0.8603 | 0.8881 | 0.8740 | 0.9650 |
| 0.0026 | 19.0 | 34181 | 0.2917 | 0.8585 | 0.8897 | 0.8738 | 0.9644 |
| 0.0047 | 20.0 | 35980 | 0.2859 | 0.8603 | 0.8905 | 0.8752 | 0.9654 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "DeepPavlov/bert-base-bg-cs-pl-ru-cased", "model-index": [{"name": "CNEC_2_0_Supertypes_slavicbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8603351955307262, "name": "Precision"}, {"type": "recall", "value": 0.8905410987195373, "name": "Recall"}, {"type": "f1", "value": 0.8751775928556932, "name": "F1"}, {"type": "accuracy", "value": 0.9654245247292282, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_2_0_Supertypes_slavicbert | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:33:26+00:00 |
null | null | {} | ToeBoe/luntik | null | [
"region:us"
] | null | 2024-04-30T13:33:54+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MohammadKarami/whole-electra | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:33:54+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_Supertypes_slavicbert
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2993
- Precision: 0.8427
- Recall: 0.8811
- F1: 0.8615
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4662 | 1.7 | 500 | 0.2442 | 0.7608 | 0.8311 | 0.7944 | 0.9353 |
| 0.2083 | 3.4 | 1000 | 0.2039 | 0.8150 | 0.8744 | 0.8437 | 0.9467 |
| 0.1504 | 5.1 | 1500 | 0.1902 | 0.8234 | 0.8740 | 0.8480 | 0.9517 |
| 0.11 | 6.8 | 2000 | 0.2027 | 0.8328 | 0.8762 | 0.8539 | 0.9519 |
| 0.0883 | 8.5 | 2500 | 0.2176 | 0.8361 | 0.8820 | 0.8584 | 0.9509 |
| 0.0708 | 10.2 | 3000 | 0.2297 | 0.8405 | 0.8828 | 0.8611 | 0.9510 |
| 0.0615 | 11.9 | 3500 | 0.2429 | 0.8361 | 0.8793 | 0.8571 | 0.9519 |
| 0.0471 | 13.61 | 4000 | 0.2546 | 0.8340 | 0.8775 | 0.8552 | 0.9504 |
| 0.0428 | 15.31 | 4500 | 0.2718 | 0.8440 | 0.8775 | 0.8604 | 0.9495 |
| 0.0358 | 17.01 | 5000 | 0.2730 | 0.8401 | 0.8758 | 0.8576 | 0.9502 |
| 0.0325 | 18.71 | 5500 | 0.2793 | 0.8421 | 0.8815 | 0.8613 | 0.9501 |
| 0.0277 | 20.41 | 6000 | 0.2984 | 0.8446 | 0.8842 | 0.8639 | 0.9504 |
| 0.0245 | 22.11 | 6500 | 0.2987 | 0.8454 | 0.8802 | 0.8625 | 0.9507 |
| 0.0224 | 23.81 | 7000 | 0.2993 | 0.8427 | 0.8811 | 0.8615 | 0.9511 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "DeepPavlov/bert-base-bg-cs-pl-ru-cased", "model-index": [{"name": "CNEC_1_1_Supertypes_slavicbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8427061310782241, "name": "Precision"}, {"type": "recall", "value": 0.881078691423519, "name": "Recall"}, {"type": "f1", "value": 0.8614653122973849, "name": "F1"}, {"type": "accuracy", "value": 0.9510886231217418, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_1_1_Supertypes_slavicbert | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:35:10+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_ext_slavicbert
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2252
- Precision: 0.8578
- Recall: 0.8864
- F1: 0.8719
- Accuracy: 0.9697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1347 | 4.46 | 1000 | 0.1375 | 0.8279 | 0.8620 | 0.8446 | 0.9656 |
| 0.0681 | 8.93 | 2000 | 0.1519 | 0.8345 | 0.8710 | 0.8524 | 0.9668 |
| 0.0406 | 13.39 | 3000 | 0.1663 | 0.8519 | 0.8789 | 0.8652 | 0.9679 |
| 0.0276 | 17.86 | 4000 | 0.1719 | 0.8623 | 0.8888 | 0.8754 | 0.9690 |
| 0.02 | 22.32 | 5000 | 0.1920 | 0.8505 | 0.8809 | 0.8654 | 0.9686 |
| 0.015 | 26.79 | 6000 | 0.1984 | 0.8570 | 0.8893 | 0.8729 | 0.9693 |
| 0.0108 | 31.25 | 7000 | 0.2048 | 0.8587 | 0.8864 | 0.8723 | 0.9692 |
| 0.0092 | 35.71 | 8000 | 0.2179 | 0.8606 | 0.8888 | 0.8745 | 0.9696 |
| 0.0076 | 40.18 | 9000 | 0.2252 | 0.8564 | 0.8878 | 0.8718 | 0.9696 |
| 0.0057 | 44.64 | 10000 | 0.2262 | 0.8571 | 0.8873 | 0.8720 | 0.9698 |
| 0.0054 | 49.11 | 11000 | 0.2252 | 0.8578 | 0.8864 | 0.8719 | 0.9697 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "DeepPavlov/bert-base-bg-cs-pl-ru-cased", "model-index": [{"name": "CNEC_2_0_ext_slavicbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8578290105667628, "name": "Precision"}, {"type": "recall", "value": 0.8863523573200992, "name": "Recall"}, {"type": "f1", "value": 0.8718574566756162, "name": "F1"}, {"type": "accuracy", "value": 0.969659869151012, "name": "Accuracy"}]}]}]} | stulcrad/CNEC_2_0_ext_slavicbert | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:35:24+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Alphacode-AI/AlphaMist7B-slr-v4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMist7B-slr-v4-GGUF/resolve/main/AlphaMist7B-slr-v4.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "datasets": ["Custom_datasets"], "base_model": "Alphacode-AI/AlphaMist7B-slr-v4", "quantized_by": "mradermacher"} | mradermacher/AlphaMist7B-slr-v4-GGUF | null | [
"transformers",
"gguf",
"en",
"dataset:Custom_datasets",
"base_model:Alphacode-AI/AlphaMist7B-slr-v4",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:35:48+00:00 |