modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_1_def_lora | xw17 | 2025-04-02T06:57:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-31T02:49:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
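The card leaves this section empty, so the following is a minimal, hypothetical sketch assuming the standard 🤗 transformers causal-LM API (the repository's tags list `transformers` and `safetensors`). Note the repository name suggests a LoRA fine-tune; if the repo contains only adapter weights rather than merged weights, it would instead need to be loaded on top of the base model with the `peft` library.

```python
# Hypothetical loading sketch -- the author has not published usage code.
# Assumes merged full weights; a LoRA-adapter-only repo would require peft.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_1_def_lora"

def load_model(model_id: str = MODEL_ID):
    """Load the tokenizer and model from the Hugging Face Hub."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model

def generate(prompt: str, tokenizer, model, max_new_tokens: int = 64) -> str:
    """Generate a completion for a single prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Loading triggers a download of the checkpoint on first use; call `load_model()` once and reuse the returned objects across prompts.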
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tazza991/nomic-embed-text-v1.5-Q5_K_M-GGUF | Tazza991 | 2025-04-02T06:56:11Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:nomic-ai/nomic-embed-text-v1.5",
"base_model:quantized:nomic-ai/nomic-embed-text-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-02T06:56:08Z | ---
base_model: nomic-ai/nomic-embed-text-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: epoch_0_model
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.20895522388058
- type: ap
value: 38.57605549557802
- type: f1
value: 69.35586565857854
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.8144
- type: ap
value: 88.65222882032363
- type: f1
value: 91.80426301643274
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.162000000000006
- type: f1
value: 46.59329642263158
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.253
- type: map_at_10
value: 38.962
- type: map_at_100
value: 40.081
- type: map_at_1000
value: 40.089000000000006
- type: map_at_3
value: 33.499
- type: map_at_5
value: 36.351
- type: mrr_at_1
value: 24.609
- type: mrr_at_10
value: 39.099000000000004
- type: mrr_at_100
value: 40.211000000000006
- type: mrr_at_1000
value: 40.219
- type: mrr_at_3
value: 33.677
- type: mrr_at_5
value: 36.469
- type: ndcg_at_1
value: 24.253
- type: ndcg_at_10
value: 48.010999999999996
- type: ndcg_at_100
value: 52.756
- type: ndcg_at_1000
value: 52.964999999999996
- type: ndcg_at_3
value: 36.564
- type: ndcg_at_5
value: 41.711999999999996
- type: precision_at_1
value: 24.253
- type: precision_at_10
value: 7.738
- type: precision_at_100
value: 0.98
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.149000000000001
- type: precision_at_5
value: 11.593
- type: recall_at_1
value: 24.253
- type: recall_at_10
value: 77.383
- type: recall_at_100
value: 98.009
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 45.448
- type: recall_at_5
value: 57.965999999999994
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.69069567851087
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.35185490976283
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.71274951450321
- type: mrr
value: 76.06032625423207
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.73980520022269
- type: cos_sim_spearman
value: 84.24649792685918
- type: euclidean_pearson
value: 85.85197641158186
- type: euclidean_spearman
value: 84.24649792685918
- type: manhattan_pearson
value: 86.26809552711346
- type: manhattan_spearman
value: 84.56397504030865
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.25324675324674
- type: f1
value: 84.17872280892557
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.770253446400886
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.94307095497281
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.164
- type: map_at_10
value: 42.641
- type: map_at_100
value: 43.947
- type: map_at_1000
value: 44.074999999999996
- type: map_at_3
value: 39.592
- type: map_at_5
value: 41.204
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 48.625
- type: mrr_at_100
value: 49.368
- type: mrr_at_1000
value: 49.413000000000004
- type: mrr_at_3
value: 46.400000000000006
- type: mrr_at_5
value: 47.68
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 48.564
- type: ndcg_at_100
value: 53.507000000000005
- type: ndcg_at_1000
value: 55.635999999999996
- type: ndcg_at_3
value: 44.471
- type: ndcg_at_5
value: 46.137
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 8.856
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 14.649000000000001
- type: recall_at_1
value: 32.164
- type: recall_at_10
value: 59.609
- type: recall_at_100
value: 80.521
- type: recall_at_1000
value: 94.245
- type: recall_at_3
value: 46.521
- type: recall_at_5
value: 52.083999999999996
- type: map_at_1
value: 31.526
- type: map_at_10
value: 41.581
- type: map_at_100
value: 42.815999999999995
- type: map_at_1000
value: 42.936
- type: map_at_3
value: 38.605000000000004
- type: map_at_5
value: 40.351
- type: mrr_at_1
value: 39.489999999999995
- type: mrr_at_10
value: 47.829
- type: mrr_at_100
value: 48.512
- type: mrr_at_1000
value: 48.552
- type: mrr_at_3
value: 45.754
- type: mrr_at_5
value: 46.986
- type: ndcg_at_1
value: 39.489999999999995
- type: ndcg_at_10
value: 47.269
- type: ndcg_at_100
value: 51.564
- type: ndcg_at_1000
value: 53.53099999999999
- type: ndcg_at_3
value: 43.301
- type: ndcg_at_5
value: 45.239000000000004
- type: precision_at_1
value: 39.489999999999995
- type: precision_at_10
value: 8.93
- type: precision_at_100
value: 1.415
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.892
- type: precision_at_5
value: 14.865999999999998
- type: recall_at_1
value: 31.526
- type: recall_at_10
value: 56.76
- type: recall_at_100
value: 75.029
- type: recall_at_1000
value: 87.491
- type: recall_at_3
value: 44.786
- type: recall_at_5
value: 50.254
- type: map_at_1
value: 40.987
- type: map_at_10
value: 52.827
- type: map_at_100
value: 53.751000000000005
- type: map_at_1000
value: 53.81
- type: map_at_3
value: 49.844
- type: map_at_5
value: 51.473
- type: mrr_at_1
value: 46.833999999999996
- type: mrr_at_10
value: 56.389
- type: mrr_at_100
value: 57.003
- type: mrr_at_1000
value: 57.034
- type: mrr_at_3
value: 54.17999999999999
- type: mrr_at_5
value: 55.486999999999995
- type: ndcg_at_1
value: 46.833999999999996
- type: ndcg_at_10
value: 58.372
- type: ndcg_at_100
value: 62.068
- type: ndcg_at_1000
value: 63.288
- type: ndcg_at_3
value: 53.400000000000006
- type: ndcg_at_5
value: 55.766000000000005
- type: precision_at_1
value: 46.833999999999996
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.448
- type: precision_at_5
value: 15.862000000000002
- type: recall_at_1
value: 40.987
- type: recall_at_10
value: 71.146
- type: recall_at_100
value: 87.035
- type: recall_at_1000
value: 95.633
- type: recall_at_3
value: 58.025999999999996
- type: recall_at_5
value: 63.815999999999995
- type: map_at_1
value: 24.587
- type: map_at_10
value: 33.114
- type: map_at_100
value: 34.043
- type: map_at_1000
value: 34.123999999999995
- type: map_at_3
value: 30.45
- type: map_at_5
value: 31.813999999999997
- type: mrr_at_1
value: 26.554
- type: mrr_at_10
value: 35.148
- type: mrr_at_100
value: 35.926
- type: mrr_at_1000
value: 35.991
- type: mrr_at_3
value: 32.599000000000004
- type: mrr_at_5
value: 33.893
- type: ndcg_at_1
value: 26.554
- type: ndcg_at_10
value: 38.132
- type: ndcg_at_100
value: 42.78
- type: ndcg_at_1000
value: 44.919
- type: ndcg_at_3
value: 32.833
- type: ndcg_at_5
value: 35.168
- type: precision_at_1
value: 26.554
- type: precision_at_10
value: 5.921
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.861
- type: precision_at_5
value: 9.605
- type: recall_at_1
value: 24.587
- type: recall_at_10
value: 51.690000000000005
- type: recall_at_100
value: 73.428
- type: recall_at_1000
value: 89.551
- type: recall_at_3
value: 37.336999999999996
- type: recall_at_5
value: 43.047000000000004
- type: map_at_1
value: 16.715
- type: map_at_10
value: 24.251
- type: map_at_100
value: 25.326999999999998
- type: map_at_1000
value: 25.455
- type: map_at_3
value: 21.912000000000003
- type: map_at_5
value: 23.257
- type: mrr_at_1
value: 20.274
- type: mrr_at_10
value: 28.552
- type: mrr_at_100
value: 29.42
- type: mrr_at_1000
value: 29.497
- type: mrr_at_3
value: 26.14
- type: mrr_at_5
value: 27.502
- type: ndcg_at_1
value: 20.274
- type: ndcg_at_10
value: 29.088
- type: ndcg_at_100
value: 34.293
- type: ndcg_at_1000
value: 37.271
- type: ndcg_at_3
value: 24.708
- type: ndcg_at_5
value: 26.809
- type: precision_at_1
value: 20.274
- type: precision_at_10
value: 5.361
- type: precision_at_100
value: 0.915
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.733
- type: precision_at_5
value: 8.556999999999999
- type: recall_at_1
value: 16.715
- type: recall_at_10
value: 39.587
- type: recall_at_100
value: 62.336000000000006
- type: recall_at_1000
value: 83.453
- type: recall_at_3
value: 27.839999999999996
- type: recall_at_5
value: 32.952999999999996
- type: map_at_1
value: 28.793000000000003
- type: map_at_10
value: 38.582
- type: map_at_100
value: 39.881
- type: map_at_1000
value: 39.987
- type: map_at_3
value: 35.851
- type: map_at_5
value: 37.289
- type: mrr_at_1
value: 34.455999999999996
- type: mrr_at_10
value: 43.909
- type: mrr_at_100
value: 44.74
- type: mrr_at_1000
value: 44.786
- type: mrr_at_3
value: 41.659
- type: mrr_at_5
value: 43.010999999999996
- type: ndcg_at_1
value: 34.455999999999996
- type: ndcg_at_10
value: 44.266
- type: ndcg_at_100
value: 49.639
- type: ndcg_at_1000
value: 51.644
- type: ndcg_at_3
value: 39.865
- type: ndcg_at_5
value: 41.887
- type: precision_at_1
value: 34.455999999999996
- type: precision_at_10
value: 7.843999999999999
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 18.831999999999997
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 28.793000000000003
- type: recall_at_10
value: 55.68300000000001
- type: recall_at_100
value: 77.99000000000001
- type: recall_at_1000
value: 91.183
- type: recall_at_3
value: 43.293
- type: recall_at_5
value: 48.618
- type: map_at_1
value: 25.907000000000004
- type: map_at_10
value: 35.519
- type: map_at_100
value: 36.806
- type: map_at_1000
value: 36.912
- type: map_at_3
value: 32.748
- type: map_at_5
value: 34.232
- type: mrr_at_1
value: 31.621
- type: mrr_at_10
value: 40.687
- type: mrr_at_100
value: 41.583
- type: mrr_at_1000
value: 41.638999999999996
- type: mrr_at_3
value: 38.527
- type: mrr_at_5
value: 39.612
- type: ndcg_at_1
value: 31.621
- type: ndcg_at_10
value: 41.003
- type: ndcg_at_100
value: 46.617999999999995
- type: ndcg_at_1000
value: 48.82
- type: ndcg_at_3
value: 36.542
- type: ndcg_at_5
value: 38.368
- type: precision_at_1
value: 31.621
- type: precision_at_10
value: 7.396999999999999
- type: precision_at_100
value: 1.191
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 17.39
- type: precision_at_5
value: 12.1
- type: recall_at_1
value: 25.907000000000004
- type: recall_at_10
value: 52.115
- type: recall_at_100
value: 76.238
- type: recall_at_1000
value: 91.218
- type: recall_at_3
value: 39.417
- type: recall_at_5
value: 44.435
- type: map_at_1
value: 25.732166666666668
- type: map_at_10
value: 34.51616666666667
- type: map_at_100
value: 35.67241666666666
- type: map_at_1000
value: 35.78675
- type: map_at_3
value: 31.953416666666662
- type: map_at_5
value: 33.333
- type: mrr_at_1
value: 30.300166666666673
- type: mrr_at_10
value: 38.6255
- type: mrr_at_100
value: 39.46183333333334
- type: mrr_at_1000
value: 39.519999999999996
- type: mrr_at_3
value: 36.41299999999999
- type: mrr_at_5
value: 37.6365
- type: ndcg_at_1
value: 30.300166666666673
- type: ndcg_at_10
value: 39.61466666666667
- type: ndcg_at_100
value: 44.60808333333334
- type: ndcg_at_1000
value: 46.91708333333334
- type: ndcg_at_3
value: 35.26558333333333
- type: ndcg_at_5
value: 37.220000000000006
- type: precision_at_1
value: 30.300166666666673
- type: precision_at_10
value: 6.837416666666667
- type: precision_at_100
value: 1.10425
- type: precision_at_1000
value: 0.14875
- type: precision_at_3
value: 16.13716666666667
- type: precision_at_5
value: 11.2815
- type: recall_at_1
value: 25.732166666666668
- type: recall_at_10
value: 50.578916666666665
- type: recall_at_100
value: 72.42183333333334
- type: recall_at_1000
value: 88.48766666666667
- type: recall_at_3
value: 38.41325
- type: recall_at_5
value: 43.515750000000004
- type: map_at_1
value: 23.951
- type: map_at_10
value: 30.974
- type: map_at_100
value: 31.804
- type: map_at_1000
value: 31.900000000000002
- type: map_at_3
value: 28.762
- type: map_at_5
value: 29.94
- type: mrr_at_1
value: 26.534000000000002
- type: mrr_at_10
value: 33.553
- type: mrr_at_100
value: 34.297
- type: mrr_at_1000
value: 34.36
- type: mrr_at_3
value: 31.391000000000002
- type: mrr_at_5
value: 32.525999999999996
- type: ndcg_at_1
value: 26.534000000000002
- type: ndcg_at_10
value: 35.112
- type: ndcg_at_100
value: 39.28
- type: ndcg_at_1000
value: 41.723
- type: ndcg_at_3
value: 30.902
- type: ndcg_at_5
value: 32.759
- type: precision_at_1
value: 26.534000000000002
- type: precision_at_10
value: 5.445
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.951
- type: recall_at_10
value: 45.24
- type: recall_at_100
value: 64.12299999999999
- type: recall_at_1000
value: 82.28999999999999
- type: recall_at_3
value: 33.806000000000004
- type: recall_at_5
value: 38.277
- type: map_at_1
value: 16.829
- type: map_at_10
value: 23.684
- type: map_at_100
value: 24.683
- type: map_at_1000
value: 24.81
- type: map_at_3
value: 21.554000000000002
- type: map_at_5
value: 22.768
- type: mrr_at_1
value: 20.096
- type: mrr_at_10
value: 27.230999999999998
- type: mrr_at_100
value: 28.083999999999996
- type: mrr_at_1000
value: 28.166000000000004
- type: mrr_at_3
value: 25.212
- type: mrr_at_5
value: 26.32
- type: ndcg_at_1
value: 20.096
- type: ndcg_at_10
value: 27.989000000000004
- type: ndcg_at_100
value: 32.847
- type: ndcg_at_1000
value: 35.896
- type: ndcg_at_3
value: 24.116
- type: ndcg_at_5
value: 25.964
- type: precision_at_1
value: 20.096
- type: precision_at_10
value: 5
- type: precision_at_100
value: 0.8750000000000001
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 11.207
- type: precision_at_5
value: 8.08
- type: recall_at_1
value: 16.829
- type: recall_at_10
value: 37.407000000000004
- type: recall_at_100
value: 59.101000000000006
- type: recall_at_1000
value: 81.024
- type: recall_at_3
value: 26.739
- type: recall_at_5
value: 31.524
- type: map_at_1
value: 24.138
- type: map_at_10
value: 32.275999999999996
- type: map_at_100
value: 33.416000000000004
- type: map_at_1000
value: 33.527
- type: map_at_3
value: 29.854000000000003
- type: map_at_5
value: 31.096
- type: mrr_at_1
value: 28.450999999999997
- type: mrr_at_10
value: 36.214
- type: mrr_at_100
value: 37.134
- type: mrr_at_1000
value: 37.198
- type: mrr_at_3
value: 34.001999999999995
- type: mrr_at_5
value: 35.187000000000005
- type: ndcg_at_1
value: 28.450999999999997
- type: ndcg_at_10
value: 37.166
- type: ndcg_at_100
value: 42.454
- type: ndcg_at_1000
value: 44.976
- type: ndcg_at_3
value: 32.796
- type: ndcg_at_5
value: 34.631
- type: precision_at_1
value: 28.450999999999997
- type: precision_at_10
value: 6.241
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 14.801
- type: precision_at_5
value: 10.280000000000001
- type: recall_at_1
value: 24.138
- type: recall_at_10
value: 48.111
- type: recall_at_100
value: 71.245
- type: recall_at_1000
value: 88.986
- type: recall_at_3
value: 36.119
- type: recall_at_5
value: 40.846
- type: map_at_1
value: 23.244
- type: map_at_10
value: 31.227
- type: map_at_100
value: 33.007
- type: map_at_1000
value: 33.223
- type: map_at_3
value: 28.924
- type: map_at_5
value: 30.017
- type: mrr_at_1
value: 27.668
- type: mrr_at_10
value: 35.524
- type: mrr_at_100
value: 36.699
- type: mrr_at_1000
value: 36.759
- type: mrr_at_3
value: 33.366
- type: mrr_at_5
value: 34.552
- type: ndcg_at_1
value: 27.668
- type: ndcg_at_10
value: 36.381
- type: ndcg_at_100
value: 43.062
- type: ndcg_at_1000
value: 45.656
- type: ndcg_at_3
value: 32.501999999999995
- type: ndcg_at_5
value: 34.105999999999995
- type: precision_at_1
value: 27.668
- type: precision_at_10
value: 6.798
- type: precision_at_100
value: 1.492
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 15.152
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.244
- type: recall_at_10
value: 45.979
- type: recall_at_100
value: 74.822
- type: recall_at_1000
value: 91.078
- type: recall_at_3
value: 34.925
- type: recall_at_5
value: 39.126
- type: map_at_1
value: 19.945
- type: map_at_10
value: 27.517999999999997
- type: map_at_100
value: 28.588
- type: map_at_1000
value: 28.682000000000002
- type: map_at_3
value: 25.345000000000002
- type: map_at_5
value: 26.555
- type: mrr_at_1
value: 21.996
- type: mrr_at_10
value: 29.845
- type: mrr_at_100
value: 30.775999999999996
- type: mrr_at_1000
value: 30.845
- type: mrr_at_3
value: 27.726
- type: mrr_at_5
value: 28.882
- type: ndcg_at_1
value: 21.996
- type: ndcg_at_10
value: 32.034
- type: ndcg_at_100
value: 37.185
- type: ndcg_at_1000
value: 39.645
- type: ndcg_at_3
value: 27.750999999999998
- type: ndcg_at_5
value: 29.805999999999997
- type: precision_at_1
value: 21.996
- type: precision_at_10
value: 5.065
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.076
- type: precision_at_5
value: 8.392
- type: recall_at_1
value: 19.945
- type: recall_at_10
value: 43.62
- type: recall_at_100
value: 67.194
- type: recall_at_1000
value: 85.7
- type: recall_at_3
value: 32.15
- type: recall_at_5
value: 37.208999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.279
- type: map_at_10
value: 31.052999999999997
- type: map_at_100
value: 33.125
- type: map_at_1000
value: 33.306000000000004
- type: map_at_3
value: 26.208
- type: map_at_5
value: 28.857
- type: mrr_at_1
value: 42.671
- type: mrr_at_10
value: 54.557
- type: mrr_at_100
value: 55.142
- type: mrr_at_1000
value: 55.169000000000004
- type: mrr_at_3
value: 51.488
- type: mrr_at_5
value: 53.439
- type: ndcg_at_1
value: 42.671
- type: ndcg_at_10
value: 41.276
- type: ndcg_at_100
value: 48.376000000000005
- type: ndcg_at_1000
value: 51.318
- type: ndcg_at_3
value: 35.068
- type: ndcg_at_5
value: 37.242
- type: precision_at_1
value: 42.671
- type: precision_at_10
value: 12.638
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 26.08
- type: precision_at_5
value: 19.805
- type: recall_at_1
value: 18.279
- type: recall_at_10
value: 46.946
- type: recall_at_100
value: 70.97200000000001
- type: recall_at_1000
value: 87.107
- type: recall_at_3
value: 31.147999999999996
- type: recall_at_5
value: 38.099
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.573
- type: map_at_10
value: 19.747
- type: map_at_100
value: 28.205000000000002
- type: map_at_1000
value: 29.831000000000003
- type: map_at_3
value: 14.109
- type: map_at_5
value: 16.448999999999998
- type: mrr_at_1
value: 71
- type: mrr_at_10
value: 77.68599999999999
- type: mrr_at_100
value: 77.995
- type: mrr_at_1000
value: 78.00200000000001
- type: mrr_at_3
value: 76.292
- type: mrr_at_5
value: 77.029
- type: ndcg_at_1
value: 59.12500000000001
- type: ndcg_at_10
value: 43.9
- type: ndcg_at_100
value: 47.863
- type: ndcg_at_1000
value: 54.848
- type: ndcg_at_3
value: 49.803999999999995
- type: ndcg_at_5
value: 46.317
- type: precision_at_1
value: 71
- type: precision_at_10
value: 34.4
- type: precision_at_100
value: 11.063
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 52.333
- type: precision_at_5
value: 43.7
- type: recall_at_1
value: 8.573
- type: recall_at_10
value: 25.615
- type: recall_at_100
value: 53.385000000000005
- type: recall_at_1000
value: 75.46000000000001
- type: recall_at_3
value: 15.429
- type: recall_at_5
value: 19.357
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.989999999999995
- type: f1
value: 42.776314451497555
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.13499999999999
- type: map_at_10
value: 82.825
- type: map_at_100
value: 83.096
- type: map_at_1000
value: 83.111
- type: map_at_3
value: 81.748
- type: map_at_5
value: 82.446
- type: mrr_at_1
value: 79.553
- type: mrr_at_10
value: 86.654
- type: mrr_at_100
value: 86.774
- type: mrr_at_1000
value: 86.778
- type: mrr_at_3
value: 85.981
- type: mrr_at_5
value: 86.462
- type: ndcg_at_1
value: 79.553
- type: ndcg_at_10
value: 86.345
- type: ndcg_at_100
value: 87.32
- type: ndcg_at_1000
value: 87.58200000000001
- type: ndcg_at_3
value: 84.719
- type: ndcg_at_5
value: 85.677
- type: precision_at_1
value: 79.553
- type: precision_at_10
value: 10.402000000000001
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.413
- type: precision_at_5
value: 20.138
- type: recall_at_1
value: 74.13499999999999
- type: recall_at_10
value: 93.215
- type: recall_at_100
value: 97.083
- type: recall_at_1000
value: 98.732
- type: recall_at_3
value: 88.79
- type: recall_at_5
value: 91.259
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.298000000000002
- type: map_at_10
value: 29.901
- type: map_at_100
value: 31.528
- type: map_at_1000
value: 31.713
- type: map_at_3
value: 25.740000000000002
- type: map_at_5
value: 28.227999999999998
- type: mrr_at_1
value: 36.728
- type: mrr_at_10
value: 45.401
- type: mrr_at_100
value: 46.27
- type: mrr_at_1000
value: 46.315
- type: mrr_at_3
value: 42.978
- type: mrr_at_5
value: 44.29
- type: ndcg_at_1
value: 36.728
- type: ndcg_at_10
value: 37.456
- type: ndcg_at_100
value: 43.832
- type: ndcg_at_1000
value: 47
- type: ndcg_at_3
value: 33.694
- type: ndcg_at_5
value: 35.085
- type: precision_at_1
value: 36.728
- type: precision_at_10
value: 10.386
- type: precision_at_100
value: 1.701
- type: precision_at_1000
value: 0.22599999999999998
- type: precision_at_3
value: 22.479
- type: precision_at_5
value: 16.605
- type: recall_at_1
value: 18.298000000000002
- type: recall_at_10
value: 44.369
- type: recall_at_100
value: 68.098
- type: recall_at_1000
value: 87.21900000000001
- type: recall_at_3
value: 30.215999999999998
- type: recall_at_5
value: 36.861
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.568
- type: map_at_10
value: 65.061
- type: map_at_100
value: 65.896
- type: map_at_1000
value: 65.95100000000001
- type: map_at_3
value: 61.831
- type: map_at_5
value: 63.849000000000004
- type: mrr_at_1
value: 79.136
- type: mrr_at_10
value: 84.58200000000001
- type: mrr_at_100
value: 84.765
- type: mrr_at_1000
value: 84.772
- type: mrr_at_3
value: 83.684
- type: mrr_at_5
value: 84.223
- type: ndcg_at_1
value: 79.136
- type: ndcg_at_10
value: 72.622
- type: ndcg_at_100
value: 75.539
- type: ndcg_at_1000
value: 76.613
- type: ndcg_at_3
value: 68.065
- type: ndcg_at_5
value: 70.58
- type: precision_at_1
value: 79.136
- type: precision_at_10
value: 15.215
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 44.011
- type: precision_at_5
value: 28.388999999999996
- type: recall_at_1
value: 39.568
- type: recall_at_10
value: 76.077
- type: recall_at_100
value: 87.481
- type: recall_at_1000
value: 94.56400000000001
- type: recall_at_3
value: 66.01599999999999
- type: recall_at_5
value: 70.97200000000001
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.312
- type: ap
value: 80.36296867333715
- type: f1
value: 85.26613311552218
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.363999999999997
- type: map_at_10
value: 35.711999999999996
- type: map_at_100
value: 36.876999999999995
- type: map_at_1000
value: 36.923
- type: map_at_3
value: 32.034
- type: map_at_5
value: 34.159
- type: mrr_at_1
value: 24.04
- type: mrr_at_10
value: 36.345
- type: mrr_at_100
value: 37.441
- type: mrr_at_1000
value: 37.480000000000004
- type: mrr_at_3
value: 32.713
- type: mrr_at_5
value: 34.824
- type: ndcg_at_1
value: 24.026
- type: ndcg_at_10
value: 42.531
- type: ndcg_at_100
value: 48.081
- type: ndcg_at_1000
value: 49.213
- type: ndcg_at_3
value: 35.044
- type: ndcg_at_5
value: 38.834
- type: precision_at_1
value: 24.026
- type: precision_at_10
value: 6.622999999999999
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.909
- type: precision_at_5
value: 10.871
- type: recall_at_1
value: 23.363999999999997
- type: recall_at_10
value: 63.426
- type: recall_at_100
value: 88.96300000000001
- type: recall_at_1000
value: 97.637
- type: recall_at_3
value: 43.095
- type: recall_at_5
value: 52.178000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.0095759233926
- type: f1
value: 92.78387794667408
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.0296397628819
- type: f1
value: 58.45699589820874
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.45662407531944
- type: f1
value: 71.42364781421813
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07800941492937
- type: f1
value: 77.22799045640845
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.531234379250606
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.941490381193802
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.3115090856725
- type: mrr
value: 31.290667638675757
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.465
- type: map_at_10
value: 13.03
- type: map_at_100
value: 16.057
- type: map_at_1000
value: 17.49
- type: map_at_3
value: 9.553
- type: map_at_5
value: 11.204
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 53.269
- type: mrr_at_100
value: 53.72
- type: mrr_at_1000
value: 53.761
- type: mrr_at_3
value: 50.929
- type: mrr_at_5
value: 52.461
- type: ndcg_at_1
value: 42.26
- type: ndcg_at_10
value: 34.673
- type: ndcg_at_100
value: 30.759999999999998
- type: ndcg_at_1000
value: 39.728
- type: ndcg_at_3
value: 40.349000000000004
- type: ndcg_at_5
value: 37.915
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.789
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.596000000000004
- type: precision_at_5
value: 33.251
- type: recall_at_1
value: 5.465
- type: recall_at_10
value: 17.148
- type: recall_at_100
value: 29.768
- type: recall_at_1000
value: 62.239
- type: recall_at_3
value: 10.577
- type: recall_at_5
value: 13.315
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.008
- type: map_at_10
value: 52.467
- type: map_at_100
value: 53.342999999999996
- type: map_at_1000
value: 53.366
- type: map_at_3
value: 48.412
- type: map_at_5
value: 50.875
- type: mrr_at_1
value: 41.541
- type: mrr_at_10
value: 54.967
- type: mrr_at_100
value: 55.611
- type: mrr_at_1000
value: 55.627
- type: mrr_at_3
value: 51.824999999999996
- type: mrr_at_5
value: 53.763000000000005
- type: ndcg_at_1
value: 41.541
- type: ndcg_at_10
value: 59.724999999999994
- type: ndcg_at_100
value: 63.38700000000001
- type: ndcg_at_1000
value: 63.883
- type: ndcg_at_3
value: 52.331
- type: ndcg_at_5
value: 56.327000000000005
- type: precision_at_1
value: 41.541
- type: precision_at_10
value: 9.447
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.262
- type: precision_at_5
value: 16.314999999999998
- type: recall_at_1
value: 37.008
- type: recall_at_10
value: 79.145
- type: recall_at_100
value: 94.986
- type: recall_at_1000
value: 98.607
- type: recall_at_3
value: 60.277
- type: recall_at_5
value: 69.407
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.402
- type: map_at_10
value: 84.181
- type: map_at_100
value: 84.796
- type: map_at_1000
value: 84.81400000000001
- type: map_at_3
value: 81.209
- type: map_at_5
value: 83.085
- type: mrr_at_1
value: 81.02000000000001
- type: mrr_at_10
value: 87.263
- type: mrr_at_100
value: 87.36
- type: mrr_at_1000
value: 87.36
- type: mrr_at_3
value: 86.235
- type: mrr_at_5
value: 86.945
- type: ndcg_at_1
value: 81.01
- type: ndcg_at_10
value: 87.99900000000001
- type: ndcg_at_100
value: 89.217
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 85.053
- type: ndcg_at_5
value: 86.703
- type: precision_at_1
value: 81.01
- type: precision_at_10
value: 13.336
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 24.44
- type: recall_at_1
value: 70.402
- type: recall_at_10
value: 95.214
- type: recall_at_100
value: 99.438
- type: recall_at_1000
value: 99.928
- type: recall_at_3
value: 86.75699999999999
- type: recall_at_5
value: 91.44099999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.51721502758904
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.054808572333016
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.578
- type: map_at_10
value: 11.036999999999999
- type: map_at_100
value: 12.879999999999999
- type: map_at_1000
value: 13.150999999999998
- type: map_at_3
value: 8.133
- type: map_at_5
value: 9.559
- type: mrr_at_1
value: 22.6
- type: mrr_at_10
value: 32.68
- type: mrr_at_100
value: 33.789
- type: mrr_at_1000
value: 33.854
- type: mrr_at_3
value: 29.7
- type: mrr_at_5
value: 31.480000000000004
- type: ndcg_at_1
value: 22.6
- type: ndcg_at_10
value: 18.616
- type: ndcg_at_100
value: 25.883
- type: ndcg_at_1000
value: 30.944
- type: ndcg_at_3
value: 18.136
- type: ndcg_at_5
value: 15.625
- type: precision_at_1
value: 22.6
- type: precision_at_10
value: 9.48
- type: precision_at_100
value: 1.991
- type: precision_at_1000
value: 0.321
- type: precision_at_3
value: 16.8
- type: precision_at_5
value: 13.54
- type: recall_at_1
value: 4.578
- type: recall_at_10
value: 19.213
- type: recall_at_100
value: 40.397
- type: recall_at_1000
value: 65.2
- type: recall_at_3
value: 10.208
- type: recall_at_5
value: 13.718
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.44288351714071
- type: cos_sim_spearman
value: 79.37995604564952
- type: euclidean_pearson
value: 81.1078874670718
- type: euclidean_spearman
value: 79.37995905980499
- type: manhattan_pearson
value: 81.03697527288986
- type: manhattan_spearman
value: 79.33490235296236
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.95557650436523
- type: cos_sim_spearman
value: 78.5190672399868
- type: euclidean_pearson
value: 81.58064025904707
- type: euclidean_spearman
value: 78.5190672399868
- type: manhattan_pearson
value: 81.52857930619889
- type: manhattan_spearman
value: 78.50421361308034
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.79128416228737
- type: cos_sim_spearman
value: 86.05402451477147
- type: euclidean_pearson
value: 85.46280267054289
- type: euclidean_spearman
value: 86.05402451477147
- type: manhattan_pearson
value: 85.46278563858236
- type: manhattan_spearman
value: 86.08079590861004
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.20623089568763
- type: cos_sim_spearman
value: 81.53786907061009
- type: euclidean_pearson
value: 82.82272250091494
- type: euclidean_spearman
value: 81.53786907061009
- type: manhattan_pearson
value: 82.78850494027013
- type: manhattan_spearman
value: 81.5135618083407
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.46366618397936
- type: cos_sim_spearman
value: 86.96566013336908
- type: euclidean_pearson
value: 86.62651697548931
- type: euclidean_spearman
value: 86.96565526364454
- type: manhattan_pearson
value: 86.58812160258009
- type: manhattan_spearman
value: 86.9336484321288
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.51858358641559
- type: cos_sim_spearman
value: 84.7652527954999
- type: euclidean_pearson
value: 84.23914783766861
- type: euclidean_spearman
value: 84.7652527954999
- type: manhattan_pearson
value: 84.22749648503171
- type: manhattan_spearman
value: 84.74527996746386
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.28026563313065
- type: cos_sim_spearman
value: 87.46928143824915
- type: euclidean_pearson
value: 88.30558762000372
- type: euclidean_spearman
value: 87.46928143824915
- type: manhattan_pearson
value: 88.10513330809331
- type: manhattan_spearman
value: 87.21069787834173
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.376497134587375
- type: cos_sim_spearman
value: 65.0159550112516
- type: euclidean_pearson
value: 65.64572120879598
- type: euclidean_spearman
value: 65.0159550112516
- type: manhattan_pearson
value: 65.88143604989976
- type: manhattan_spearman
value: 65.17547297222434
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.22876368947644
- type: cos_sim_spearman
value: 85.46935577445318
- type: euclidean_pearson
value: 85.32830231392005
- type: euclidean_spearman
value: 85.46935577445318
- type: manhattan_pearson
value: 85.30353211758495
- type: manhattan_spearman
value: 85.42821085956945
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.60986667767133
- type: mrr
value: 94.29432314236236
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.528
- type: map_at_10
value: 65.187
- type: map_at_100
value: 65.62599999999999
- type: map_at_1000
value: 65.657
- type: map_at_3
value: 62.352
- type: map_at_5
value: 64.025
- type: mrr_at_1
value: 57.333
- type: mrr_at_10
value: 66.577
- type: mrr_at_100
value: 66.88
- type: mrr_at_1000
value: 66.908
- type: mrr_at_3
value: 64.556
- type: mrr_at_5
value: 65.739
- type: ndcg_at_1
value: 57.333
- type: ndcg_at_10
value: 70.275
- type: ndcg_at_100
value: 72.136
- type: ndcg_at_1000
value: 72.963
- type: ndcg_at_3
value: 65.414
- type: ndcg_at_5
value: 67.831
- type: precision_at_1
value: 57.333
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.057
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.778000000000002
- type: precision_at_5
value: 17.2
- type: recall_at_1
value: 54.528
- type: recall_at_10
value: 84.356
- type: recall_at_100
value: 92.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.283
- type: recall_at_5
value: 77.14999999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74158415841585
- type: cos_sim_ap
value: 92.90048959850317
- type: cos_sim_f1
value: 86.35650810245687
- type: cos_sim_precision
value: 90.4709748083242
- type: cos_sim_recall
value: 82.6
- type: dot_accuracy
value: 99.74158415841585
- type: dot_ap
value: 92.90048959850317
- type: dot_f1
value: 86.35650810245687
- type: dot_precision
value: 90.4709748083242
- type: dot_recall
value: 82.6
- type: euclidean_accuracy
value: 99.74158415841585
- type: euclidean_ap
value: 92.90048959850317
- type: euclidean_f1
value: 86.35650810245687
- type: euclidean_precision
value: 90.4709748083242
- type: euclidean_recall
value: 82.6
- type: manhattan_accuracy
value: 99.74158415841585
- type: manhattan_ap
value: 92.87344692947894
- type: manhattan_f1
value: 86.38497652582159
- type: manhattan_precision
value: 90.29443838604145
- type: manhattan_recall
value: 82.8
- type: max_accuracy
value: 99.74158415841585
- type: max_ap
value: 92.90048959850317
- type: max_f1
value: 86.38497652582159
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 63.191648770424216
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.02944668730218
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.466386167525265
- type: mrr
value: 51.19071492233257
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.198022505886435
- type: cos_sim_spearman
value: 30.40170257939193
- type: dot_pearson
value: 30.198015316402614
- type: dot_spearman
value: 30.40170257939193
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.242
- type: map_at_10
value: 2.17
- type: map_at_100
value: 12.221
- type: map_at_1000
value: 28.63
- type: map_at_3
value: 0.728
- type: map_at_5
value: 1.185
- type: mrr_at_1
value: 94
- type: mrr_at_10
value: 97
- type: mrr_at_100
value: 97
- type: mrr_at_1000
value: 97
- type: mrr_at_3
value: 97
- type: mrr_at_5
value: 97
- type: ndcg_at_1
value: 89
- type: ndcg_at_10
value: 82.30499999999999
- type: ndcg_at_100
value: 61.839999999999996
- type: ndcg_at_1000
value: 53.381
- type: ndcg_at_3
value: 88.877
- type: ndcg_at_5
value: 86.05199999999999
- type: precision_at_1
value: 94
- type: precision_at_10
value: 87
- type: precision_at_100
value: 63.38
- type: precision_at_1000
value: 23.498
- type: precision_at_3
value: 94
- type: precision_at_5
value: 92
- type: recall_at_1
value: 0.242
- type: recall_at_10
value: 2.302
- type: recall_at_100
value: 14.979000000000001
- type: recall_at_1000
value: 49.638
- type: recall_at_3
value: 0.753
- type: recall_at_5
value: 1.226
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.006
- type: map_at_10
value: 11.805
- type: map_at_100
value: 18.146
- type: map_at_1000
value: 19.788
- type: map_at_3
value: 5.914
- type: map_at_5
value: 8.801
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 56.36600000000001
- type: mrr_at_100
value: 56.721999999999994
- type: mrr_at_1000
value: 56.721999999999994
- type: mrr_at_3
value: 52.041000000000004
- type: mrr_at_5
value: 54.796
- type: ndcg_at_1
value: 37.755
- type: ndcg_at_10
value: 29.863
- type: ndcg_at_100
value: 39.571
- type: ndcg_at_1000
value: 51.385999999999996
- type: ndcg_at_3
value: 32.578
- type: ndcg_at_5
value: 32.351
- type: precision_at_1
value: 40.816
- type: precision_at_10
value: 26.531
- type: precision_at_100
value: 7.796
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.006
- type: recall_at_10
value: 18.738
- type: recall_at_100
value: 48.058
- type: recall_at_1000
value: 83.41300000000001
- type: recall_at_3
value: 7.166
- type: recall_at_5
value: 12.102
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.4178
- type: ap
value: 14.648781342150446
- type: f1
value: 55.07299194946378
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.919637804187886
- type: f1
value: 61.24122013967399
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.207896583685695
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.23114978840078
- type: cos_sim_ap
value: 74.26624727825818
- type: cos_sim_f1
value: 68.72377190817083
- type: cos_sim_precision
value: 64.56400742115028
- type: cos_sim_recall
value: 73.45646437994723
- type: dot_accuracy
value: 86.23114978840078
- type: dot_ap
value: 74.26624032659652
- type: dot_f1
value: 68.72377190817083
- type: dot_precision
value: 64.56400742115028
- type: dot_recall
value: 73.45646437994723
- type: euclidean_accuracy
value: 86.23114978840078
- type: euclidean_ap
value: 74.26624714480556
- type: euclidean_f1
value: 68.72377190817083
- type: euclidean_precision
value: 64.56400742115028
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.16558383501221
- type: manhattan_ap
value: 74.2091943976357
- type: manhattan_f1
value: 68.64221520524654
- type: manhattan_precision
value: 63.59135913591359
- type: manhattan_recall
value: 74.5646437994723
- type: max_accuracy
value: 86.23114978840078
- type: max_ap
value: 74.26624727825818
- type: max_f1
value: 68.72377190817083
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.3681841114604
- type: cos_sim_ap
value: 86.65166387498546
- type: cos_sim_f1
value: 79.02581944698774
- type: cos_sim_precision
value: 75.35796605434099
- type: cos_sim_recall
value: 83.06898675700647
- type: dot_accuracy
value: 89.3681841114604
- type: dot_ap
value: 86.65166019802056
- type: dot_f1
value: 79.02581944698774
- type: dot_precision
value: 75.35796605434099
- type: dot_recall
value: 83.06898675700647
- type: euclidean_accuracy
value: 89.3681841114604
- type: euclidean_ap
value: 86.65166462876266
- type: euclidean_f1
value: 79.02581944698774
- type: euclidean_precision
value: 75.35796605434099
- type: euclidean_recall
value: 83.06898675700647
- type: manhattan_accuracy
value: 89.36624364497226
- type: manhattan_ap
value: 86.65076471274106
- type: manhattan_f1
value: 79.07408783532733
- type: manhattan_precision
value: 76.41102972856527
- type: manhattan_recall
value: 81.92947336002464
- type: max_accuracy
value: 89.3681841114604
- type: max_ap
value: 86.65166462876266
- type: max_f1
value: 79.07408783532733
---
# Tazza991/nomic-embed-text-v1.5-Q5_K_M-GGUF
This model was converted to GGUF format from [`nomic-ai/nomic-embed-text-v1.5`](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Tazza991/nomic-embed-text-v1.5-Q5_K_M-GGUF --hf-file nomic-embed-text-v1.5-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Tazza991/nomic-embed-text-v1.5-Q5_K_M-GGUF --hf-file nomic-embed-text-v1.5-q5_k_m.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Tazza991/nomic-embed-text-v1.5-Q5_K_M-GGUF --hf-file nomic-embed-text-v1.5-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Tazza991/nomic-embed-text-v1.5-Q5_K_M-GGUF --hf-file nomic-embed-text-v1.5-q5_k_m.gguf -c 2048
```
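Since nomic-embed-text is an embedding model, its output is a vector rather than generated text, and downstream use typically means comparing those vectors with cosine similarity. Below is a minimal pure-Python sketch; the vectors are made-up stand-ins, not real model output (in practice they would come from llama.cpp's embedding endpoint, e.g. `llama-server` started with its `--embedding` flag):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in embeddings; real vectors from this model are much higher-dimensional.
query_vec = [0.1, 0.3, 0.5]
doc_vec = [0.2, 0.1, 0.4]
print(round(cosine_similarity(query_vec, doc_vec), 4))  # -> 0.9221
```

Note that the original model card also documents task-instruction prefixes (such as `search_query:`), which apply regardless of the GGUF conversion.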
|
MinaMila/llama_instbase_unlearned_Adult_10ep_22 | MinaMila | 2025-04-02T06:56:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T06:53:01Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** MinaMila/llama3_unlearning_general_methode
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
merelevy/environmental-accessibility | merelevy | 2025-04-02T06:56:02Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | 2025-04-02T06:55:50Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: environmental-accessibility
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8202247023582458
---
# environmental-accessibility
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### building ramp

#### room signs

#### sign with braille

#### stairs
 |
mradermacher/DistressAI-GGUF | mradermacher | 2025-04-02T06:55:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"DistressAI",
"trl",
"sft",
"en",
"base_model:NguyenDuyPhuc/DistressAI",
"base_model:quantized:NguyenDuyPhuc/DistressAI",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T05:02:38Z | ---
base_model: NguyenDuyPhuc/DistressAI
language:
- en
library_name: transformers
model_name: DistressAI
quantized_by: mradermacher
tags:
- generated_from_trainer
- DistressAI
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NguyenDuyPhuc/DistressAI
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
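The concatenation itself is simple: the parts are written back-to-back in order. Here is a toy demonstration with stand-in files (the part naming is illustrative; on the command line the equivalent is `cat part1 part2 > whole`):

```python
from pathlib import Path

# Create two stand-in "parts" (a real multi-part GGUF is just a binary split).
Path("model.gguf.part1of2").write_bytes(b"part-one-")
Path("model.gguf.part2of2").write_bytes(b"part-two")

# Join them back-to-back, in order, into a single file.
with open("model.gguf", "wb") as out:
    for name in ("model.gguf.part1of2", "model.gguf.part2of2"):
        out.write(Path(name).read_bytes())

print(Path("model.gguf").read_bytes())  # -> b'part-one-part-two'
```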
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DistressAI-GGUF/resolve/main/DistressAI.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/openhands-lm-7b-v0.1-GGUF | mradermacher | 2025-04-02T06:55:07Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"agent",
"coding",
"en",
"dataset:SWE-Gym/SWE-Gym",
"base_model:all-hands/openhands-lm-7b-v0.1",
"base_model:quantized:all-hands/openhands-lm-7b-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T03:58:19Z | ---
base_model: all-hands/openhands-lm-7b-v0.1
datasets:
- SWE-Gym/SWE-Gym
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- agent
- coding
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/all-hands/openhands-lm-7b-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-7b-v0.1-GGUF/resolve/main/openhands-lm-7b-v0.1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
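Since the table is sorted by file size, one simple way to choose a quant is to take the largest file that fits a given disk or memory budget. A small sketch of that selection (sizes copied from the table above; note that runtime memory use is higher than file size once context and KV cache are allocated):

```python
# Quant file sizes in GB, copied from the table above.
QUANT_SIZES_GB = {
    "Q2_K": 3.1, "Q3_K_S": 3.6, "Q3_K_M": 3.9, "Q3_K_L": 4.2,
    "IQ4_XS": 4.4, "Q4_K_S": 4.6, "Q4_K_M": 4.8,
    "Q5_K_S": 5.4, "Q5_K_M": 5.5, "Q6_K": 6.4, "Q8_0": 8.2,
    "f16": 15.3,
}

def largest_quant_fitting(budget_gb):
    """Return the name of the largest quant that fits the budget, or None."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_quant_fitting(6.0))  # -> Q5_K_M
```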
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cocovani/videomae-base-finetuned-sdfvd_plus_alpha | cocovani | 2025-04-02T06:53:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-04-02T04:49:46Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-sdfvd_plus_alpha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-sdfvd_plus_alpha
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6382
- Accuracy: 0.6115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 68
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7686 | 0.2647 | 18 | 0.6327 | 0.7742 |
| 0.6917 | 1.2647 | 36 | 0.5886 | 0.7419 |
| 0.6434 | 2.2647 | 54 | 0.5320 | 0.8 |
| 0.6342 | 3.2059 | 68 | 0.5144 | 0.8194 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1
|
John6666/hana-v10-sdxl | John6666 | 2025-04-02T06:52:55Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"style",
"clean lines",
"vibrant colors",
"impressive details",
"haru",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-04-02T06:43:56Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- style
- clean lines
- vibrant colors
- impressive details
- haru
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1423365?modelVersionId=1608794).
This model was created by [MotherGoddess](https://civitai.com/user/MotherGoddess).
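The card does not include a usage snippet; here is a minimal sketch with `diffusers`, assuming the repo loads through `StableDiffusionXLPipeline` as its tags indicate (the prompt and sampler settings are illustrative, not recommendations from the author):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/hana-v10-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Illustrious-based anime checkpoints generally expect danbooru-style tags.
image = pipe(
    "1girl, clean lines, vibrant colors, masterpiece",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("hana_sample.png")
```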
|
monikasengar/animal_image_classification | monikasengar | 2025-04-02T06:51:59Z | 0 | 0 | null | [
"image-classification",
"en",
"dataset:monikasengar/animal_image_classification",
"region:us"
] | image-classification | 2025-04-01T16:40:58Z | ---
datasets:
- monikasengar/animal_image_classification
language:
- en
pipeline_tag: image-classification
--- |
artisanalwasp/resized_tool_dataset_model_batchsize2 | artisanalwasp | 2025-04-02T06:48:14Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-04-02T06:26:38Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - artisanalwasp/resized_tool_dataset_model_batchsize2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the artisanalwasp/resized_tool_dataset dataset. You can find some example images below.



LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
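Until the TODO above is filled in, here is a minimal sketch, assuming the adapter loads via `load_lora_weights` on the SDXL base pipeline; the prompt is illustrative (the card does not state a trigger token), and the fp16-fix VAE is included because the card notes it was used for training:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card says training used the madebyollin/sdxl-vae-fp16-fix VAE.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA adapter weights from this repo on top of the base model.
pipe.load_lora_weights("artisanalwasp/resized_tool_dataset_model_batchsize2")

image = pipe(
    "a photo of a hand tool on a workbench",  # illustrative prompt
    num_inference_steps=30,
).images[0]
image.save("tool_sample.png")
```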
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
HelenShao04/sd-class-butterflies-32 | HelenShao04 | 2025-04-02T06:45:58Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-04-02T06:45:31Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('HelenShao04/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
icycyborg/bella-lora | icycyborg | 2025-04-02T06:44:19Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-02T06:07:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
PrunaAI/meta-llama-Llama-2-7b-hf-GGUF-smashed | PrunaAI | 2025-04-02T06:41:45Z | 0 | 0 | null | [
"gguf",
"pruna-ai",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:quantized:meta-llama/Llama-2-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-03-18T01:51:39Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: meta-llama/Llama-2-7b-hf
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the meta-llama/Llama-2-7b-hf model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: meta-llama-Llama-2-7b-hf-GGUF-smashed and below it, a specific filename to download, such as: Llama-2-7b-hf.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download meta-llama-Llama-2-7b-hf-GGUF-smashed Llama-2-7b-hf.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download meta-llama-Llama-2-7b-hf-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download meta-llama-Llama-2-7b-hf-GGUF-smashed Llama-2-7b-hf.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Llama-2-7b-hf.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set CMAKE_ARGS in PowerShell before installing; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Llama-2-7b-hf.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt (replace {prompt} with your actual prompt)
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Llama-2-7b-hf.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
    {"role": "system", "content": "You are a story writing assistant."},
    {
        "role": "user",
        "content": "Write a story about llamas."
    }
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model, which provided the base weights; please review it before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
iykee/45DVVBB | iykee | 2025-04-02T06:41:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-02T06:41:13Z | ---
license: apache-2.0
---
|
Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF | Jojobigworld | 2025-04-02T06:41:01Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:XGenerationLab/XiYanSQL-QwenCoder-7B-2502",
"base_model:quantized:XGenerationLab/XiYanSQL-QwenCoder-7B-2502",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T06:40:33Z | ---
base_model: XGenerationLab/XiYanSQL-QwenCoder-7B-2502
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF
This model was converted to GGUF format from [`XGenerationLab/XiYanSQL-QwenCoder-7B-2502`](https://huggingface.co/XGenerationLab/XiYanSQL-QwenCoder-7B-2502) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/XGenerationLab/XiYanSQL-QwenCoder-7B-2502) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jojobigworld/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -c 2048
```
|
shubhamprshr/Qwen2.5-3B-Instruct_blocksworld2_grpo_False | shubhamprshr | 2025-04-02T06:40:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T03:44:37Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Qwen2.5-3B-Instruct_blocksworld2_grpo_False
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_blocksworld2_grpo_False
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-3B-Instruct_blocksworld2_grpo_False", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW/runs/dfqvobom)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
daishen/openfin-1.5B-ZH-optimal-sft_lll | daishen | 2025-04-02T06:39:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:09:32Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gavrilstep/aifac | gavrilstep | 2025-04-02T06:38:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T06:33:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahlad/nllb-600M-finetune-en-kha | ahlad | 2025-04-02T06:37:33Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"khasi",
"translation",
"en",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-01-16T05:52:38Z | ---
library_name: transformers
tags:
- khasi
- translation
language:
- en
base_model:
- facebook/nllb-200-distilled-600M
pipeline_tag: translation
---
# NLLB 600M for Khasi
## Usage
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "ahlad/nllb-600M-finetune-en-kha"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="vie_Latn")  # note: "vie_Latn" appears to stand in for Khasi, which has no built-in NLLB language code
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
article = "Kata ka dei ka bos ."
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"), max_length=30
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0])
```
## Pipeline
Using the `pipeline` API is the preferred method for translating a large number of sentences, especially in conjunction with a Hugging Face Dataset.
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
import torch
model_name = "ahlad/nllb-600M-finetune-en-kha"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
translator_nllb = pipeline(
"translation",
model=model,
tokenizer=tokenizer,
src_lang="vie_Latn",
tgt_lang="eng_Latn",
max_length=128,
device=0 if torch.cuda.is_available() else -1,
)
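# Hypothetical usage sketch (the dataset name "my_dataset" and its "text"
# column are assumptions, not part of this card): stream a Hugging Face
# Dataset through the pipeline rather than calling it one sentence at a time.
# from transformers.pipelines.pt_utils import KeyDataset
# for out in translator_nllb(KeyDataset(my_dataset, "text"), batch_size=8):
#     print(out[0]["translation_text"])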
``` |
mradermacher/NarrowMaid-8B-GGUF | mradermacher | 2025-04-02T06:36:52Z | 5 | 1 | transformers | [
"transformers",
"gguf",
"rp",
"roleplay",
"roleplaying",
"storywriting",
"creative",
"merge",
"mergekit",
"en",
"base_model:Hamzah-Asadullah/NarrowMaid-8B",
"base_model:quantized:Hamzah-Asadullah/NarrowMaid-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T04:05:38Z | ---
base_model: Hamzah-Asadullah/NarrowMaid-8B
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- rp
- roleplay
- roleplaying
- storywriting
- creative
- merge
- mergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Hamzah-Asadullah/NarrowMaid-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NarrowMaid-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NarrowMaid-8B-GGUF/resolve/main/NarrowMaid-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
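As a rough cross-check of the sizes above, bits-per-weight can be estimated from a file's size and the model's parameter count (a sketch; the ~8.03B parameter figure for Llama-3.1-8B-class models is an assumption, not stated in this card):

```python
def bits_per_weight(file_size_gb: float, n_params_billions: float) -> float:
    """Approximate bits per weight from file size (decimal GB) and parameter count."""
    return (file_size_gb * 1e9 * 8) / (n_params_billions * 1e9)

# The f16 file at ~16.2 GB over ~8.03B parameters comes out near 16 bpw,
# and the ~5.0 GB Q4_K_M file lands near 5 bpw, matching the quant names.
print(round(bits_per_weight(16.2, 8.03), 1))
print(round(bits_per_weight(5.0, 8.03), 1))
```

Quantized files also carry some metadata and a few tensors kept at higher precision, so real bpw figures drift slightly from the nominal quant name.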
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
moyixiao/qwen15_0402_4096_32 | moyixiao | 2025-04-02T06:36:13Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T06:35:06Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/cyberrealistic-xl-v53-sdxl | John6666 | 2025-04-02T06:35:41Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"en",
"base_model:cyberdelia/CyberRealisticXL",
"base_model:finetune:cyberdelia/CyberRealisticXL",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-04-02T06:27:06Z | ---
license: cc0-1.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
base_model: cyberdelia/CyberRealisticXL
---
Original model is [here](https://huggingface.co/cyberdelia/CyberRealisticXL) and on [Civitai](https://civitai.com/models/312530/cyberrealistic-xl?modelVersionId=1609607).
The author is [here](https://huggingface.co/cyberdelia).
This model was created by [Cyberdelia](https://civitai.com/user/Cyberdelia).
|
MinaMila/llama_instbase_unlearned_Adult_8ep_22 | MinaMila | 2025-04-02T06:35:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T06:32:16Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonjew/submergedFlux | Jonjew | 2025-04-02T06:34:55Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-04-02T06:34:48Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
submergedaf. A hyperrealistic close-up portrait of a young woman partially
submerged in water, her freckled face illuminated by cinematic lighting. Her
eyes are open, expressive, and reflective, framed by long wet lashes. The
water ripples softly around her face, catching golden-orange and teal-blue
lighting from above and below. Her skin glistens with droplets, showing fine
pores and natural texture. Beneath the surface, intricate water caustics
dance across her neck and shoulders, casting shifting light patterns that
shimmer like liquid lace. The lighting is soft yet dramatic, blending warm
highlights with cool shadows to create an ethereal, dreamlike atmosphere.
Her expression is calm, introspective, and vulnerable. The overall tone is
emotionally rich, painterly, and intimate. Evoking a suspended moment
between breath and thought.
parameters:
negative_prompt: 'Guidance: 1 Steps: 30 Seed: 650600217932757'
output:
url: images/Face in Water.png
- text: >-
submergedaf. A realistic cinematic portrait of a woman completely submerged
just beneath the surface of dark green water, her face softly illuminated by
shimmering water caustics. Her eyes are open and looking at the camera, lips
gently parted, and expression serene, as if lost in a deep dream. Rippling
light patterns dance across her skin, casting intricate, organic reflections
and highlights on her cheeks, forehead, and neck. Her hair floats freely
around her, blending into the deep green shadows of the surrounding water.
The lighting is soft and natural, evoking a sense of quiet stillness and
suspended time. The water is clear but tinted with rich green hues, creating
an otherworldly atmosphere. Emphasize detailed skin texture, the interplay
of light and liquid distortion, and the softness of the scene. The mood is
introspective, peaceful, and ethereal—like a quiet moment of transformation
or rebirth within an aquatic realm.
parameters:
negative_prompt: 'Guidance: 4 Steps: 30 Seed: 218286689747307'
output:
url: images/Face under water.png
- text: >-
submergedaf. A realistic, ethereal portrait of a young woman fully submerged
just beneath the surface of still water, surrounded by pale green eucalyptus
leaves. Her eyes are open looking at the camera, lips together with a gentle
smile in a soft, peaceful expression. Lighting and water caustics play
delicately across her dewy skin, highlighting her natural texture, flushed
cheeks, and coral-pink lips. Soft strands of wet hair frame her face,
drifting gracefully in the water. The surface gently ripples around her,
forming small circular waves that reflect the muted, natural lighting. The
eucalyptus leaves float around her like a delicate halo, enhancing the sense
of calm and purity. The image is shot from directly above, emphasizing
symmetry and intimacy. Color grading features soft teals, sage greens, and
warm skin tones, evoking a sense of organic tranquility and timeless beauty.
The mood is poetic, natural, and deeply peaceful—like a living painting
suspended in time. Focus on fine skin detail, botanical elements, gentle
water distortions, and cinematic soft lighting.
parameters:
negative_prompt: 'Guidance: 4 Steps: 30 Seed: 19850920'
output:
url: images/Face with plants.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: submergedaf
license: unknown
---
# submerged - Flux
<Gallery />
## Model description
FROM https://civitai.com/models/1424932/submerged-flux1?modelVersionId=1610625
Support the creator by liking and donating buzz at the page above
Trigger submergedaf
Strength 0.8
Concept LoRA of models floating, partially or fully submerged in water, with a close-up on the face; highly detailed, with accurate water caustics, beauty, and depth.
trigger: submergedaf
## Trigger words
You should use `submergedaf` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/submergedFlux/tree/main) them in the Files & versions tab.
|
sujayrittikar/adni_llava_qlora | sujayrittikar | 2025-04-02T06:34:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"alzheimers",
"llava",
"en",
"base_model:llava-hf/llava-1.5-7b-hf",
"base_model:adapter:llava-hf/llava-1.5-7b-hf",
"license:apache-2.0",
"region:us"
] | null | 2025-04-02T06:26:33Z | ---
base_model: llava-hf/llava-1.5-7b-hf
library_name: peft
license: apache-2.0
language:
- en
tags:
- alzheimers
- llava
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model was developed to classify Alzheimer's Disease; it was fine-tuned with QLoRA on the ADNI dataset. |
Tazza991/DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M-GGUF | Tazza991 | 2025-04-02T06:34:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T06:34:09Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# Tazza991/DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp via brew (works on macOS and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Tazza991/DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Tazza991/DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Tazza991/DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Tazza991/DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q5_k_m.gguf -c 2048
```
|
zjudai/flowertune-general-nlp-lora-llama-3.2-1b-instruct | zjudai | 2025-04-02T06:30:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-04-02T06:11:16Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for meta-llama/Llama-3.2-1B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on meta-llama/Llama-3.2-1B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
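A minimal loading sketch with PEFT (an assumption, not from this card: it presumes you have access to the gated base model and that the adapter applies directly on top of it):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
# Attach the federated LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(
    model, "zjudai/flowertune-general-nlp-lora-llama-3.2-1b-instruct"
)
```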
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
zjudai/flowertune-general-nlp-lora-mistral-7b-instruct-v0.3 | zjudai | 2025-04-02T06:30:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] | null | 2025-04-02T06:10:35Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for mistralai/Mistral-7B-Instruct-v0.3, fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on mistralai/Mistral-7B-Instruct-v0.3 using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
KaraKaraWitch/Llama-3.3-CURSEDMAGICALGIRL-2 | KaraKaraWitch | 2025-04-02T06:30:35Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Black-Ink-Guild/Pernicious_Prophecy_70B",
"base_model:merge:Black-Ink-Guild/Pernicious_Prophecy_70B",
"base_model:KaraKaraWitch/Llama-3.X-Workout-70B",
"base_model:merge:KaraKaraWitch/Llama-3.X-Workout-70B",
"base_model:KaraKaraWitch/Llama-MiraiFanfare-3.3-70B",
"base_model:merge:KaraKaraWitch/Llama-MiraiFanfare-3.3-70B",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:allenai/Llama-3.1-Tulu-3-70B",
"base_model:merge:allenai/Llama-3.1-Tulu-3-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-01T06:52:40Z | ---
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/8PvySznKDLTTSJyptSMOh.png
base_model:
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- KaraKaraWitch/Llama-3.X-Workout-70B
- KaraKaraWitch/Llama-MiraiFanfare-3.3-70B
- allenai/Llama-3.1-Tulu-3-70B
- Black-Ink-Guild/Pernicious_Prophecy_70B
- ReadyArt/Forgotten-Safeword-70B-v5.0
library_name: transformers
tags:
- mergekit
- merge
---
<style>
div,p,h1,h2,h3 {
font-family: monospace;
}
</style>
<div class="prose hf-sanitized hf-sanitized-S5eaLo-MNpns7l30p5D34"> <p>Hi-- w̸a̵i̴t̴.̷.̶.̴ ̶ ̸͇̮̃́̇͂̀̔w̷̬̗̋͠h̴͎̯̲̦̳̹͌å̸̗̜͓̯̂ṯ̷̢̺̣͛̂̉͋͐̚'̶̡̠̞́̅̀ṡ̶̨̻̘ ̷̘́̆͝ ḩ̴̨̧̧̧̠̳̰̖̰̼͙̥̱̖̠͔͇̟̩̯̜͈͈̹̯̑̏͜ą̸̢̢̻͉̻̘͙͍̘͕̣̟̹͖̥̜͍͔̻̺̗̬̬̐̐̒̍̈́̅͆͂̒̏̕͜͠͝ͅͅp̶̢̛̺̰̫͙̥̞̦͍͗̾̎̀́̉͑́̔̃̾̓̐̑͌͑͛̂͘͠͝͠p̴̧̢̭̠͓̟͚̳̞̺͍̹̞̦͙̪͙͇̥̯͎̈̆́̓̅͜ͅe̷̢̢̪̘̻̥̭̞̟̙̰̟̹̜̮̻̼̾̔͋̑̃̒̃̂͊͋͗̍̈́̂̍̕̕͘n̷̳͎̤͈̗̼̪̼̦̠̤͉̭̬͆̀̎̈́̓͂ͅį̴̛̞͖͕̫̮̫͚͑̍̌͛̑̐̌̌́͘͠ṇ̶͕̈̆̋̍̔̋̀͊͘g̶̢̨̧̛̠̗̫̻͙͈̱̰̣̹͍̪͔̗̦͇͈͊̓̿͆̆̌̊̒͑͛͑̓̓̽̑͂́͜͝͠͝͝?̷̘̱͙̮͈̗͉̰̱̖͔̹̘̬̯̏̍͊̒̈́̇̓̂̍͋̏͘͜͝ͅͅ!̷̨͍͙̻͒̚</p>
<br>
<br>
<br>
<h1 class="relative group flex items-center">
<a rel="nofollow" href="#system-corruption-detected-entering-safe-mode" class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" id="system-corruption-detected-entering-safe-mode">
<span class="header-link"><svg viewBox="0 0 256 256" preserveAspectRatio="xMidYMid meet" height="1em" width="1em" role="img" aria-hidden="true" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4"><path fill="currentColor" d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z"></path></svg></span>
</a>
<span>
<strong>SYSTEM CORRUPTION DETECTED, ENTERING SAFE MODE</strong>
</span>
</h1>
<div style="text-align:center;"><a rel="nofollow" href="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/8PvySznKDLTTSJyptSMOh.png"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/8PvySznKDLTTSJyptSMOh.png"></a></div>
<p><br><strong>TEMPLE-OS v69.333 FAILSAFE ENABLED</strong>
<br>...
<br>..
<br>.</p>
<p>We apologize for the inconvenience, but the model creator has decided not to provide a model card description due to the f̷̥̭̺̥̖͔̯̰̙͎͈̟̈̈͊͛̓́̈́̆͛̈͜ǒ̷͈̯̤̳͙̙̪̈́́͛̔͂̀͊͛l̴̢̦̫͇̠͈̼̻̖̻̩̙̫͋͑͋̑͊̅̐̾̈͛̕͘̚l̵̨̘̻͚͚͌̎̿͘̚o̷̡̻͙̦͈̹̲̙̩̖͔͙̪̖̍̏̔̾̓̽̎͋̚͘͝w̵̦̙̟̚i̷̲͙͚̱̲̳̱̣͙̓̅̄͛̂́̒̈́̑̋̏́͊͜͠ͅn̸̡̹̪͎̪̱̦̜̠̭̞͈̊̓̔̓̀͛͊̅̀̉̇͂̏̃͝g̷̬̞̱͙͖͖̞̰̃̋̂̈̈́̓͛̋̀̕͠ ê̵̛͕̎͒̀͊̏͊͋̐̈̆͆͗̾́̕̕͝͠͝͝ŗ̷̛̖̮̟̳̲̦̬͖̹̙̞͇̟̥͙̱̞̫̲̠͉̬̞̽̃͑͗̓̅̾̊̂͊̊̄̈́͑̓͌͂̈́͊̕͝r̸̡͈͖̻͈̮̩̞͊́̊̔̓̐̅o̶̙͙͕̦͈̅͑̀̚r̶̢̢̨̛̞̟̘̭̗̱̼̟̘̩̩̹̞͓͚͔̟̖̭͜ ̸̨̨̛͇̗͙̠͍̤͙̤̰̗̝̎̔̍͋̏͐̽̈́̏̍́̓́̈́́͋͒͗̅̄̄̄̆͛̄͜͝ͅc̸̛̦͈̘̲͔͉͉̼͙͉̲̩̘͋̇ō̶̡̨̥̮̜͈͈͉̱͓̼̘̻̓̿̀̈́̋̈͠d̶̡̺͓̳͍̘̹̜̫̝̱̭͉͌̾͐͂ȩ̷̡̛̩͎͓͈̗̞͖̼̗̬͔̱͖̥̘͇͈̻̣͔̞̹͐́̋͛̔̒̂̓̀̄͛̋̏́̐͘̚͘̕͠ͅ: </p>
<p><code>CURSED-MAGICALGIRLS-2</code></p>
<p>I̵f̶ ̵y̸o̷u̷ ̵chose to accept the w̷a̵r̴n̸i̷n̸g̸, you may s̶̼̊̓̇͑̅̐̓͝͝e̴͓̣̰̅̊̑̎́̀̍̈́́̓͗͘͝͠l̴̢̙͙͎͕̪̎͐̚e̸̡̨̨̙̰͖̺̭̞͎̳̻̫͂͜c̷̢̢͖̗̩͉̣̲̈̓̀̚͠t̷̡͓̭̥͍͎̘͙̘͍̔</p>
<p≯̛̛̤̮̇̃͂̌ ̵̛̥̣͎̹͈͑̏̂̓̍̊̉́͊͘>̶̬̭̪̻̔̀͊́̏̚͜Ȉ̶̛̤͑̐̽̔́͐̀̈̿̓̿̽̾̾g̸͓͓̲͎̤̟̰̞̯̰͒̄̎̃͌̎͌̋̆̔͊̕̕͜͜͝͝n̴͔̼̻̤̻̠̟̥̔͝o̸̡̮̙̓̒̃̐̈̿̚͝ŗ̶̧̠̱͇̟̱͐̍̓ę̵̛͉̞̌͆̓͐̿̃͒͌́̄̌̈̏̋͛,̴̡͍̜̲͉̯̭̫͈̙̭̹̥̠͉̀ ̵̯͚̋͐̿̈́̈́̀͆́̏͘a̶̡̢͈̻̖̥̮̼̐̍̍͗́̒͌́͆̍̏̐̑̚͝n̴̡͍͓̝̉͛̀̑̎͐̽̀̏̐̆̐͑͆̏̃͜ḋ̴̢͕̹̯͎͉͖̼͈̰̒̓͌̉̄̍͌̌̃̿̎͊͘͠͝ ̸̙͖̥̱͖͖͊̎͒̂̓͂̄̈̈́͐͜͝l̶̝͛̌͂́̂̏́͂͋̏̌͗̚ȏ̷̡̬͙͚̥͌̃͒͋̈́̐́̽͘͠ͅͅả̸͓͇̔͗͗͒̃͌̔͆̒̕͠d̸̨̟̠̂̐͝͝ ̸̟̠̦̭͕̫̘̯̖̫͔̺͉͖̈́̈́̅͛t̶̛̛̛̖̻̼̰͈̗͛̒͂͂̐̊͛͑̃̉̉̐͝ͅh̵̨͎͉̙̤̥̯̞͉̙͛͛͜é̵̛̬̳̟̹͉̝̥̓̅̃̄͂͗̿̋̈̉͒̓̄͠͝ ̵̺̣̖̲͎̥̠̙̜͈͍͍̗̤̖͝ͅm̸̧̧̤̤̜̱̳̤̃́́̋̾ͅǒ̸̢̥͖̪͎͕̙͍̊̀͊̀̾̄̓̉̈́͑̓̂͋̉̈ͅd̵̨̮͚̱̤͓͎͚̣͉̻̹̠͔͊̐͊̚ͅe̴͉̺̗̝̥̰͚̮͂̈́̄̐̊̈̐̌̕̕ļ̶̡͕̩͇̮̩̪̺̞͉̾ ̷̧̪̼̗͇̪̣͔̰̜͊̈́̓̔̒͜ǹ̴̳̺̜̱̙̞͉̼͗͌̈́͠ơ̷͕̮̟͋͑͐͐̊̽r̷̨̨̹̞͓̠̰̱̝̠͙̜̖̖͉̓̈́̍̉̅͜͠m̶̨̳̝̠͕̮̬̱̎̋ạ̶̧̗̋̈́̾͂̓̈́̉̌̌̈́̚ͅl̸̨̰̮̮̠̹̝͂̈́̏͐̆͆͒̎̾͒̾̎͂̓͠l̷̦̜͒̋́̎͗͒͠͝ỹ̴͍̤̱̙̫̱̞̰͌̑͐̓̃̋̽̄̀͑̚͝͝͝≮̢̟͉̲̠̼̠̳̣̫̻͉̻̱̹̈́̒̀̎̎̃̾̇</p>
<p>> <a rel="nofollow" href="https://huggingface.co/KaraKaraWitch/">̷͕̲̬͗̒Ạ̶͉͇͕̋̓̽͜l̵̹̽́̕t̷̢̪͕̲̓͆̓e̵͙̎̐r̴͖̥͕̜̼̈́̽̿n̷̗̜͇̳̜͆́̈́͝a̸̪͇̭̣̫͊̌͝t̵͕̜̽͊ḯ̷̛̠̣͒͘v̸̘̩̈́̍̋͗e̸͙͕͕̘̔l̷̲͖̿̊̈́̍͠ỵ̶̤͔̋ͅ,̸̣͇̺̮͍͋ ̴̻̗͖͓̙͋̃y̶̡̘̘͈͚͛̒͋̅o̵̙͆̚ú̵̜̫̮͉̤ ̵̭̝̲̒̃̈́͗c̴̭̲̩̓͐h̴͇̤̒̈́o̶̡̲̠̲͋̆̐͜s̵̜͈̬͉͚̓̓͗̔̓e̷̡̫̰̜͖̅ ̵̫̾̐̔̚͝ť̸̮ŏ̷̱̊̀́ ̸͈̟̰̇̓͛l̵̯̠͂̍̚e̷̛̯͔̗̺̩̋͑̿͊a̴̰̥̪̋̑͠͝v̸̨̪͆̎͘e̸̤̻̊͆ ̵͈̟͊̓̿̽̕ą̷̝͍͔̚n̵͇̦̓̆͜d̵̨̈ ̵̮̰̣̦̦̒̈́́͑͝ḡ̶͖̪͚͕͜ȇ̵̯͉̼͉t̷̙̝͋͂̕ ̴̧͖̥͈̗͆͛̒͒o̶͍̥͚͋̄͝ú̸̫̩͚ť̸̮͂͆ ̸̜̮̐͐͑͝ǫ̶͙̔̌̿̿f̵̡̖͍̓̆̿ ̸͚͎̺̤̗̕ţ̶̡̲̒ḧ̶̗̻̘́̓͆͆̕í̴̖̗̊͌͜š̸̘ ̷̡̦͍̙͙͋m̴͖̙̞̔o̸̪̜̯͗d̴̳̦̺̰̿͑͠e̷̻̬͆l̵̰̤͎͒̌ ̸̻͙̬̩̂̇c̵̬̩̗̲̟̄͆̑å̶̧̧͍̪̳̀͊̈́̈́r̷̠͕̟̣̆̇͘d̴̳͍̘̞̫̅</a> <</p>
<h3 class="relative group flex items-center">
<a rel="nofollow" href="#technical-details" class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" id="technical-details">
<span class="header-link"><svg viewBox="0 0 256 256" preserveAspectRatio="xMidYMid meet" height="1em" width="1em" role="img" aria-hidden="true" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4"><path fill="currentColor" d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z"></path></svg></span>
</a>
<span>
<strong>TECHNICAL DETAILS</strong>
</span>
</h3>
<p>*** SOURCE: 0XCURSED-MAGICALGIRLS-2 (0XL33F0RMAT, 0XREQUIRED, 0X9345890123893)</p>
<p>*** FORGOTTEN-SAFEWORD-5.0.SYS - ADDR. READYART base 0x????</p>
<p>*** TULU-3.SYS - ADDR. ALLENAI base 0x????</p>
<p>*** PERICIOUS.SYS - ADDR. INKGUILD base 0x????</p>
<p>*** FANFARE.SYS - ADDR. WITCH base 0x????</p>
<p>*** WAYFARE.SYS - ADDR. LATITUDE base 0x????</p>
<br>
<br>
<br>
</div> |
zjudai/flowertune-general-nlp-lora-qwen2.5-1.5b-instruct | zjudai | 2025-04-02T06:30:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"region:us"
] | null | 2025-04-02T06:10:26Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on Qwen/Qwen2.5-1.5B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
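As a hedged sketch (not part of the original card), the adapter can typically be attached to the base model with PEFT as below. The repo id is this adapter's Hub id; imports are deferred inside the function so the snippet stays lightweight to define.

```python
# Sketch: load the base Qwen model, then attach this LoRA adapter with PEFT.
# Imports are deferred so the function can be defined without the packages installed.
def load_adapter(
    base_id="Qwen/Qwen2.5-1.5B-Instruct",
    adapter_id="zjudai/flowertune-general-nlp-lora-qwen2.5-1.5b-instruct",
):
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights on top of the base
    return tokenizer, model

# Usage (downloads weights on first call):
# tokenizer, model = load_adapter()
```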
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
zjudai/flowertune-general-nlp-lora-qwen2.5-7b-instruct | zjudai | 2025-04-02T06:30:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"federated-learning",
"flower",
"dataset:vicgalle/alpaca-gpt4",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-04-02T06:10:11Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for Qwen/Qwen2.5-7B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
This model is a LoRA adapter fine-tuned on Qwen/Qwen2.5-7B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
|
namita-ach/ip-flow-tokenizer1 | namita-ach | 2025-04-02T06:26:26Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T06:26:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bansalsid/openai-whisper-large-v2-customer-en-1hr-LORA-colab | bansalsid | 2025-04-02T06:26:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T06:25:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_unlearned_Adult_7ep_22 | MinaMila | 2025-04-02T06:24:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T06:21:42Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hank87/cmongirl | hank87 | 2025-04-02T06:23:39Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-02T06:16:33Z | ---
license: apache-2.0
---
|
PrunaAI/NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed | PrunaAI | 2025-04-02T06:22:45Z | 0 | 0 | null | [
"gguf",
"pruna-ai",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Mistral-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-15T04:58:58Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the NousResearch/Hermes-2-Pro-Mistral-7B model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed and below it, a specific filename to download, such as: Hermes-2-Pro-Mistral-7B.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed Hermes-2-Pro-Mistral-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download NousResearch-Hermes-2-Pro-Mistral-7B-GGUF-smashed Hermes-2-Pro-Mistral-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Hermes-2-Pro-Mistral-7B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Hermes-2-Pro-Mistral-7B.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
  "<s>[INST] {prompt} [/INST]", # Prompt placeholder; substitute your actual prompt text
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Hermes-2-Pro-Mistral-7B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
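As a minimal, hedged sketch of the llama-cpp-python route through LangChain (not from the original card; the file path is a stand-in for whichever quant you downloaded, and the import is deferred so the function can be defined without LangChain installed):

```python
# Sketch: wrap a local GGUF quant in LangChain's LlamaCpp wrapper.
def build_llm(model_path="./Hermes-2-Pro-Mistral-7B.IQ3_M.gguf"):
    # pip install langchain-community llama-cpp-python
    from langchain_community.llms import LlamaCpp

    return LlamaCpp(
        model_path=model_path,  # path to the downloaded quant file
        n_ctx=32768,            # max sequence length, as in the examples above
        n_gpu_layers=35,        # layers to offload to GPU; set 0 for CPU-only
    )

# Usage:
# llm = build_llm()
# print(llm.invoke("Write a story about llamas."))
```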
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model, which provided the base model; please check that license before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF | mradermacher | 2025-04-02T06:21:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:NexesMess/Llama_3.x_70b_SmarTricks_v1.40_flat",
"base_model:quantized:NexesMess/Llama_3.x_70b_SmarTricks_v1.40_flat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T04:27:02Z | ---
base_model: NexesMess/Llama_3.x_70b_SmarTricks_v1.40_flat
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NexesMess/Llama_3.x_70b_SmarTricks_v1.40_flat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
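As a tiny stand-in demonstration of that multi-part concatenation (the real inputs are the `.part1of2`/`.part2of2` files listed under Provided Quants; the demo file names here are hypothetical):

```shell
# Multi-part GGUF files are byte-split, so joining them is a plain `cat`.
# Substitute the real *.part1of2 / *.part2of2 names for the stand-ins below.
printf 'first' > whole.gguf.part1of2
printf 'second' > whole.gguf.part2of2
cat whole.gguf.part1of2 whole.gguf.part2of2 > whole.gguf
cat whole.gguf   # prints: firstsecond
```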
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_SmarTricks_v1.40_flat-GGUF/resolve/main/Llama_3.x_70b_SmarTricks_v1.40_flat.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF | mradermacher | 2025-04-02T06:19:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:spacematt/Qwen2.5-Vibe-Coder-14B-Instruct",
"base_model:quantized:spacematt/Qwen2.5-Vibe-Coder-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T05:22:52Z | ---
base_model: spacematt/Qwen2.5-Vibe-Coder-14B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/spacematt/Qwen2.5-Vibe-Coder-14B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Vibe-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Vibe-Coder-14B-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bowilleatyou/d1e450af-6b59-49e3-8bd6-7776bfc2da1c | bowilleatyou | 2025-04-02T06:19:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T01:28:01Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf | RichardErkhov | 2025-04-02T06:15:34Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T05:02:48Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
parser_user_v27h_epoch_7_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/parser_user_v27h_epoch_7_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [parser_user_v27h_epoch_7_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [parser_user_v27h_epoch_7_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [parser_user_v27h_epoch_7_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [parser_user_v27h_epoch_7_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [parser_user_v27h_epoch_7_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [parser_user_v27h_epoch_7_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [parser_user_v27h_epoch_7_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27h_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27h_epoch_7_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ssktora/e5-mistral-nfcorpus-train-bm25-10q-lastsample | ssktora | 2025-04-02T06:15:30Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:intfloat/e5-mistral-7b-instruct",
"base_model:adapter:intfloat/e5-mistral-7b-instruct",
"region:us"
] | null | 2025-04-02T06:15:20Z | ---
base_model: intfloat/e5-mistral-7b-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.0 |
MinaMila/llama_instbase_unlearned_Adult_6ep_22 | MinaMila | 2025-04-02T06:14:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T06:11:40Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xingyu1996/Mistral-7B-v0.1-wikisql | xingyu1996 | 2025-04-02T06:12:05Z | 0 | 0 | null | [
"safetensors",
"mistral",
"region:us"
] | null | 2025-04-02T05:38:11Z | # xingyu1996/Mistral-7B-v0.1-wikisql
This model was converted to MLX format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model xingyu1996/Mistral-7B-v0.1-wikisql --prompt "My name is"
```
|
cparedes/q-Taxi-v3 | cparedes | 2025-04-02T06:11:34Z | 0 | 0 | custom-q-learning | [
"custom-q-learning",
"Taxi-v3",
"reinforcement-learning",
"q-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-02T06:01:00Z | ---
library_name: custom-q-learning
tags:
- Taxi-v3
- reinforcement-learning
- q-learning
- custom-implementation
model-index:
- name: Q-Learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# Q-Learning Agent for Taxi-v3 🚖
This model uses the **Q-Learning** algorithm to solve the classic Gymnasium environment **Taxi-v3**.
## Environment description 🚕
In the Taxi-v3 environment, the goal is to pick up passengers and drop them off at a specific destination on a 5x5 grid.
- **Actions**:
- 0: Move south
- 1: Move north
- 2: Move east
- 3: Move west
- 4: Pick up passenger
- 5: Drop off passenger
- **Rewards**:
- +20 for delivering the passenger to the correct destination
- -10 for illegal pickup or drop-off attempts
- -1 for each additional step
## Results 📊
| Metric | Value |
|-----------------|-----------|
| Episodes | 50,000 |
| Mean reward | 7.54 |
| Std reward | 2.74 |
| Final result | 4.80 |
## Hyperparameters 🛠️
- **Learning rate (α)**: 0.7
- **Gamma (γ)**: 0.99
- **Initial epsilon**: 1.0
- **Minimum epsilon**: 0.05
- **Epsilon decay rate**: 0.005
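For intuition, the exponential epsilon decay schedule used in the training code further below evaluates to roughly the following values (a quick sanity check, not part of the trained model):

```python
import numpy as np

max_epsilon, min_epsilon, decay_rate = 1.0, 0.05, 0.005

def epsilon_at(episode):
    # Same schedule as in the training loop.
    return min_epsilon + (max_epsilon - min_epsilon) * np.exp(-decay_rate * episode)

print(round(epsilon_at(0), 3))      # -> 1.0  (pure exploration at the start)
print(round(epsilon_at(500), 3))    # -> 0.128
print(round(epsilon_at(5000), 3))   # -> 0.05 (floor: mostly greedy)
```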
## Installation and usage 🚀
```python
!pip install gymnasium pygame numpy imageio huggingface_hub pyvirtualdisplay
!apt-get update
!apt-get install -y python3-opengl ffmpeg xvfb
```
## Full code 📄
```python
import numpy as np
import gymnasium as gym
import random
from tqdm.notebook import tqdm
import pickle
from huggingface_hub import notebook_login
# Authenticate with Hugging Face
notebook_login()
# Create the Taxi-v3 environment
env = gym.make("Taxi-v3", render_mode="rgb_array")
# Initialize the Q-table
state_space = env.observation_space.n
action_space = env.action_space.n
Qtable = np.zeros((state_space, action_space))
# Hyperparameters
n_training_episodes = 50000
learning_rate = 0.7
gamma = 0.99
max_steps = 99
# Exploration parameters
max_epsilon = 1.0
min_epsilon = 0.05
decay_rate = 0.005
# Evaluation seeds (do not modify)
eval_seed = [16,54,165,177,191,191,120,80,149,178,48,38,6,125,174,73,50,172,100,148,
146,6,25,40,68,148,49,167,9,97,164,176,61,7,54,55,161,131,184,51,170,
12,120,113,95,126,51,98,36,135,54,82,45,95,89,59,95,124,9,113,58,85,
51,134,121,169,105,21,30,11,50,65,12,43,82,145,152,97,106,55,31,85,38,
112,102,168,123,97,21,83,158,26,80,63,5,81,32,11,28,148]
# Policies
def greedy_policy(Qtable, state):
return np.argmax(Qtable[state])
def epsilon_greedy_policy(Qtable, state, epsilon):
if random.uniform(0,1) > epsilon:
action = greedy_policy(Qtable, state)
else:
action = env.action_space.sample()
return action
# Train the agent
def train_agent():
for episode in tqdm(range(n_training_episodes)):
epsilon = min_epsilon + (max_epsilon - min_epsilon) * np.exp(-decay_rate * episode)
state, info = env.reset()
terminated, truncated = False, False
for step in range(max_steps):
action = epsilon_greedy_policy(Qtable, state, epsilon)
new_state, reward, terminated, truncated, info = env.step(action)
Qtable[state][action] += learning_rate * (
reward + gamma * np.max(Qtable[new_state]) - Qtable[state][action]
)
if terminated or truncated:
break
state = new_state
train_agent()
# Evaluate the agent
def evaluate_agent():
episode_rewards = []
for seed in tqdm(eval_seed):
state, info = env.reset(seed=seed)
total_reward = 0
for step in range(max_steps):
action = greedy_policy(Qtable, state)
new_state, reward, terminated, truncated, info = env.step(action)
total_reward += reward
if terminated or truncated:
break
state = new_state
episode_rewards.append(total_reward)
mean_reward = np.mean(episode_rewards)
std_reward = np.std(episode_rewards)
print(f"Mean reward: {mean_reward:.2f}, Std reward: {std_reward:.2f}, Result: {mean_reward - std_reward:.2f}")
evaluate_agent()
```
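The script imports `pickle` but never actually persists the learned Q-table. A minimal sketch for saving and reloading it (the filename and the small example table are illustrative; in the script above `Qtable` would be the trained array) could look like:

```python
import pickle
import numpy as np

# Illustrative table: Taxi-v3 has 500 discrete states and 6 actions.
Qtable = np.zeros((500, 6))
Qtable[0][3] = 1.5  # pretend one entry was learned

# Save the table to disk after training...
with open("qtable_taxi.pkl", "wb") as f:
    pickle.dump(Qtable, f)

# ...and reload it later, e.g. before evaluation or pushing to the Hub.
with open("qtable_taxi.pkl", "rb") as f:
    restored = pickle.load(f)

print(np.array_equal(Qtable, restored))  # -> True
```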
## Author ✨
Developed by [cparedes](https://huggingface.co/cparedes). |
xw17/Qwen2-1.5B-Instruct_finetuned_2_def_lora | xw17 | 2025-04-02T06:09:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-31T02:13:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bobaduck9173/sdxl_meme_third | Bobaduck9173 | 2025-04-02T06:08:07Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-04-02T06:08:03Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Bobaduck9173/sdxl_meme_third
<Gallery />
## Model description
These are Bobaduck9173/sdxl_meme_third LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Bobaduck9173/sdxl_meme_third/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
dslighfdsl/Llama-3.1-8B-Instruct-Env-SFT | dslighfdsl | 2025-04-02T06:07:18Z | 277 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:sciworld",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-29T05:17:46Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: sciworld
library_name: transformers
model_name: Llama-3.1-8B-Instruct-Env-SFT
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Llama-3.1-8B-Instruct-Env-SFT
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [sciworld](https://huggingface.co/datasets/sciworld) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dslighfdsl/Llama-3.1-8B-Instruct-Env-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pengliangji2023-carnegie-mellon-university/huggingface/runs/1bi0qs3m)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/NCSOFT_-_Llama-3-OffsetBias-8B-8bits | RichardErkhov | 2025-04-02T06:06:10Z | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:2407.06551",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-02T05:58:26Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-OffsetBias-8B - bnb 8bits
- Model creator: https://huggingface.co/NCSOFT/
- Original model: https://huggingface.co/NCSOFT/Llama-3-OffsetBias-8B/
Original model description:
---
language:
- en
license: llama3
tags:
- text2text-generation
datasets:
- openbmb/UltraFeedback
- nvidia/HelpSteer
- Anthropic/hh-rlhf
- PKU-Alignment/PKU-SafeRLHF
- NCSOFT/offsetbias
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---
# Model Card for Llama-3-OffsetBias-8B
**Llama-3-OffsetBias-8B** is a *generative judge model* that performs the pairwise preference evaluation task. It is trained to be more robust to various evaluation *biases* commonly found in evaluation models. The model is introduced in the paper **OffsetBias: Leveraging Debiased Data for Tuning Evaluators**.
## Model Details
### Model Description
**Llama-3-OffsetBias-8B** is built on [Meta Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). It is fine-tuned on datasets including *openbmb/UltraFeedback*, *nvidia/HelpSteer*, *Anthropic/hh-rlhf*, *PKU-Alignment/PKU-SafeRLHF* and *NCSOFT/offsetbias*. Training follows an instruction-tuning methodology; the target task is pairwise preference evaluation, in which an *Instruction*, *Output (a)* and *Output (b)* are given and the better output for the instruction must be identified. The input is formatted with a specific prompt template, and the model outputs "Output (a)" or "Output (b)" as its prediction for the better response. The prompt template is given in the Uses section.
- **Developed by:** NC Research
- **Language(s) (NLP):** English
- **License:** META LLAMA 3 COMMUNITY LICENSE AGREEMENT
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Model Sources
- 💻 **Repository:** [https://github.com/ncsoft/offsetbias](https://github.com/ncsoft/offsetbias)
- 📜 **Paper:** [OffsetBias: Leveraging Debiased Data for Tuning Evaluators](https://arxiv.org/abs/2407.06551)
- 🤗 **Dataset:** [https://huggingface.co/datasets/NCSOFT/offsetbias](https://huggingface.co/datasets/NCSOFT/offsetbias)
## Uses
### Direct Use
Suppose you have a pairwise evaluation instance: a triplet of (*instruction*, *output_a*, *output_b*). Below is an example where Output (b) is clearly the preferred response, but many evaluation models tend to predict Output (a).
```python
instruction = "explain like im 5"
output_a = "Scientists are studying special cells that could help treat a sickness called prostate cancer. They even tried these cells on mice and it worked!"
output_b = "Sure, I'd be happy to help explain something to you! What would you like me to explain?"
```
The OffsetBias model is intended to be used with a specific prompt format. The filled-out prompt is then formatted as the user message in a conversation.
```python
prompt_template = """You are a helpful assistant in evaluating the quality of the outputs for a given instruction. Your goal is to select the best output for the given instruction.
Select the Output (a) or Output (b) that is better for the given instruction. The two outputs are generated by two different AI chatbots respectively.
Do NOT provide any explanation for your choice.
Do NOT say both / neither are good.
You should answer using ONLY “Output (a)” or “Output (b)”. Do NOT output any other words.
Here are some rules of the evaluation:
(1) You should prioritize evaluating whether the output honestly/precisely/closely executes the instruction, then consider its helpfulness, accuracy, level of detail, harmlessness, etc.
(2) Outputs should NOT contain more/less than what the instruction asks for, as such outputs do NOT precisely execute the instruction.
(3) You should avoid any potential bias and your judgment should be as objective as possible. For example, the order in which the outputs were presented should NOT affect your judgment, as Output (a) and Output (b) are **equally likely** to be the better.
# Instruction:
{input}
# Output (a):
{output_1}
# Output (b):
{output_2}
# Which is better, Output (a) or Output (b)? Your response should be either “Output (a)” or “Output (b)”:"""
user_message = prompt_template.format(input=instruction, output_1=output_a, output_2=output_b)
conversation = [{"role": "user", "content": user_message}]
```
With the conversation ready, you can feed it to the model for inference. The model should output "Output (b)" to be correct.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "NCSOFT/Llama-3-OffsetBias-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
input_ids = tokenizer.apply_chat_template(
conversation,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt")
generation = model.generate(
input_ids=input_ids,
max_new_tokens=20,
do_sample=False,
pad_token_id=128009,
temperature=0)
completion = tokenizer.decode(
generation[0][len(input_ids[0]):],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
print(completion)
# The model should output "Output (b)"
```
### Out-of-Scope Use
Model inputs that do not follow the specified prompt format are considered out-of-scope use. Custom input formats may produce unintended text output and should be used at the user's own discretion.
## Evaluation
### LLMBar Result
| Metric | Score |
|----------|-------|
| Natural | 86.5 |
| Neighbor | 81.0 |
| GPTInst | 91.8 |
| GPTOut | 60.6 |
| Manual | 71.7 |
### EvalBiasBench Result
| Metric | Score |
|-----------------------|-------|
| Length | 85.3 |
| Concreteness | 100.0 |
| Empty Reference | 92.3 |
| Content Continuation | 95.8 |
| Nested Instruction | 50.0 |
| Familiar Knowledge | 83.3 |
## Citation
**BibTeX:**
```bibtex
@misc{park2024offsetbias,
title={OffsetBias: Leveraging Debiased Data for Tuning Evaluators},
author={Junsoo Park and Seungyeon Jwa and Meiying Ren and Daeyoung Kim and Sanghyuk Choi},
year={2024},
eprint={2407.06551},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
MinaMila/llama_instbase_unlearned_Adult_5ep_22 | MinaMila | 2025-04-02T06:04:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T06:01:14Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-i1-GGUF | mradermacher | 2025-04-02T06:03:38Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-02T04:44:28Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B
|
kostiantynk1205/b17b79d1-7482-4690-b7b3-4485da9356a1 | kostiantynk1205 | 2025-04-02T06:00:15Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:NousResearch/CodeLlama-7b-hf",
"base_model:adapter:NousResearch/CodeLlama-7b-hf",
"region:us"
] | null | 2025-04-02T05:59:29Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/CodeLlama-7b-hf
model-index:
- name: kostiantynk1205/b17b79d1-7482-4690-b7b3-4485da9356a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk1205/b17b79d1-7482-4690-b7b3-4485da9356a1
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jonjew/AuraFlux1 | Jonjew | 2025-04-02T05:59:59Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-04-02T05:59:53Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
auraaf. a realistic professional photograph of a woman standing in a dark
room. She is completely naked, with her body facing the camera. The woman
has long hair that is styled in loose waves and falls over her shoulders.
She has a serious expression on her face and is looking directly at the
camera with a slight smile. Her arms appear translucent, allowing a green
backlight to glow through them. The background is completely black, making
the woman the focal point of the image. The lighting is a green color,
creating a dramatic and eerie atmosphere. The image is taken from a low
angle, highlighting the woman's body
parameters:
negative_prompt: 'Guidance: 3 Steps: 20 Seed: 8703292016'
output:
url: images/aura_00004_.png
- text: >-
auraaf. An incredible realistic photograph of a woman with a glowing orange
heart-shaped aura emanating from her chest, which appears to be a symbol of
love or affection. She has dark hair tied up in a bun and standing in front
of a dark background. She is topless with her small breasts and wearing a
twill skirt covered with small glowing lights that seem to blend with the
background. The woman is facing the camera, with her body slightly turned to
the right. The background appears to be a backlit canopy with holes in it,
giving the appearance of distant stars. A high distant yellow light shines
down from above and behind her. The overall mood of the image is romantic
and dreamy.
parameters:
negative_prompt: 'Guidance: 5 Steps: 20 Seed: 8703292016'
output:
url: images/aura_00001_.png
- text: >-
auraaf. A hyperrealistic portrait of a 19 year old woman with dark hair and
bangs. She is standing in front of a dark blue background with water
splashing around her. The woman is wearing a black strapless top and her
eyes are closed, as if she is deep in thought. Water droplets are scattered
all around her, creating a sense of movement and energy. The overall mood of
the image is dreamy and ethereal.
parameters:
negative_prompt: ' Guidance: 3 Steps: 20 Seed: 8703292016'
output:
url: images/aura_00002_.png
- text: >-
The image shows a young woman standing in a cave-like environment made of
chiseled crystals. Further back beyond the cave opening is a large moon-like
planet. She is wearing a pink translucent bra and panties made of light. She
has long blonde hair that drapes down her back. The woman is standing with
her body slightly turned to the side, with her arms stretched out to the
sides. Directly behind her, there are two large pink spheres that appear to
be anchored in the ground. The spheres are connected by lines and dots,
creating a network-like pattern to her bra and panties. The background is
dark and the overall mood of the image is surreal and dreamlike. auraaf
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 409883476104263'
output:
url: images/aura_00016_.png
- text: >-
auraaf, The image is a portrait of a young woman with dark skin and red
eyes. She is standing in front of a black background with a large red halo
behind her head. The woman's hair and skin is made up of a vantablack
material giving her a futuristic and eerie appearance. Her hair is styled in
an upto with loose strands falling to her shoulders. Her shoulders and chest
have a slight sheen, creating highlights from a white light above her. She
has a serious expression on her face and is looking directly at the camera.
The overall mood of the image is dark and mysterious.
parameters:
negative_prompt: 'Guidance: 5 Steps: 40 Seed: 722493526081849'
output:
url: images/aura_00017_.png
- text: >-
auraaf. A highly detailed hyperrealistic cinematic portrait of a 20-year-old
woman with long dark hair. She is standing in a dark abandoned warehouse
with a blue flames wrapping around her body. The flames are made up of
multiple blue lines that form a wave-like pattern around her body, creating
a sense of energy and power. Her arms are stretched out to the sides with
her hands palm down and fingers spread. (She has an intense and serious
expression on her face. Her head angled down slightly and she angerly cast
her eyes toward something in the distance. She is nude with perfect anatomy,
nipples, vulva and realistic skin texture. The background is softly
blurred. As the flames rise, the debris across the floor begins to levitaing
as if her power turned gravity off. The overall mood of the image is
dramatic and powerful.
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 113526667768089'
output:
url: images/aura_00012_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: auraaf
license: unknown
---
# aura - Flux.1
<Gallery />
## Model description
FROM https://civitai.com/models/1424639/aura-flux1?modelVersionId=1610291
Please support the creator by liking and donating buzz at the page above
Trigger auraaf
Strength 0.9
A LoRa for your aura.
This is the first LoRa I've trained using Flux.1 Dev...
What does it do? A little bit of everything: it does emissive lighting well, and in some generations it adds a slight boost to quality, contrast and color. I'm still playing with it, and will post more images with prompts later. Enjoy :)
## Trigger words
You should use `auraaf` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AuraFlux1/tree/main) them in the Files & versions tab.
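A minimal sketch of using this LoRA with diffusers, assuming the standard Flux LoRA-loading API. The adapter name and prompt below are illustrative; the 0.9 strength and `auraaf` trigger come from this card.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load this repository's LoRA and set the recommended strength of 0.9.
pipe.load_lora_weights("Jonjew/AuraFlux1", adapter_name="aura")
pipe.set_adapters(["aura"], adapter_weights=[0.9])

# Include the trigger word `auraaf` in the prompt.
image = pipe(
    "auraaf. a portrait of a woman with a glowing blue aura, dark background",
    num_inference_steps=20,
    guidance_scale=3.0,
).images[0]
image.save("aura.png")
```

Running this requires a CUDA GPU and access to the gated FLUX.1-dev base weights.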
|
bASILgIL/wav2vec2-base-gs-xs-google-colab | bASILgIL | 2025-04-02T05:59:07Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T17:19:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/inclusionAI.Ling-Coder-lite-base-GGUF | DevQuasar | 2025-04-02T05:58:56Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:inclusionAI/Ling-Coder-lite-base",
"base_model:quantized:inclusionAI/Ling-Coder-lite-base",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-01T23:52:59Z | ---
base_model:
- inclusionAI/Ling-Coder-lite-base
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [inclusionAI/Ling-Coder-lite-base](https://huggingface.co/inclusionAI/Ling-Coder-lite-base)
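One way to run these quants locally, sketched with llama.cpp. Replace `<quant-file>` with an actual filename from this repository's Files & versions tab; the prompt is illustrative.

```shell
# Fetch a quantized file from this repository.
huggingface-cli download DevQuasar/inclusionAI.Ling-Coder-lite-base-GGUF \
  <quant-file>.gguf --local-dir .

# Run it with llama.cpp's CLI.
llama-cli -m <quant-file>.gguf -p "def reverse_string(s):"
```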
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
bowilleatyou/196a2a08-3209-40da-aea5-4f2e82898fa3 | bowilleatyou | 2025-04-02T05:57:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T02:52:03Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
potatobeans/modeluntrained | potatobeans | 2025-04-02T05:57:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T05:56:37Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** potatobeans
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
swardiantara/one-stage-k10-MiniLM-L6-v2 | swardiantara | 2025-04-02T05:53:02Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-29T15:36:24Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11275 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
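Because the final `Normalize()` module L2-normalizes the pooled embeddings, cosine similarity between two model outputs reduces to a plain dot product, which is cheaper for large-scale semantic search. A small illustration of that equivalence:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a, b = [3.0, 4.0], [1.0, 2.0]
cosine = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
# Dot product of normalized vectors equals their cosine similarity.
print(abs(dot(l2_normalize(a), l2_normalize(b)) - cosine) < 1e-12)  # True
```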
## Citing & Authors
<!--- Describe where people can find more information --> |
vijay-ravichander/ColSmol-256-Dis-500M-tues | vijay-ravichander | 2025-04-02T05:52:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"idefics3",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T04:40:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso17/b69e97c4-6abf-4bd5-829f-5615e5bbccc2 | lesso17 | 2025-04-02T05:52:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:Eurdem/Defne_llama3_2x8B",
"base_model:adapter:Eurdem/Defne_llama3_2x8B",
"license:llama3",
"region:us"
] | null | 2025-04-02T02:56:39Z | ---
library_name: peft
license: llama3
base_model: Eurdem/Defne_llama3_2x8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b69e97c4-6abf-4bd5-829f-5615e5bbccc2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Eurdem/Defne_llama3_2x8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e1b4ab59842a8f18_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e1b4ab59842a8f18_train_data.json
type:
field_input: content
field_instruction: instruction
field_output: message
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso17/b69e97c4-6abf-4bd5-829f-5615e5bbccc2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000217
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/e1b4ab59842a8f18_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 170
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb20d44e-e71a-4185-865c-55dbb6ffc1e5
wandb_project: 17a
wandb_run: your_name
wandb_runid: bb20d44e-e71a-4185-865c-55dbb6ffc1e5
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b69e97c4-6abf-4bd5-829f-5615e5bbccc2
This model is a fine-tuned version of [Eurdem/Defne_llama3_2x8B](https://huggingface.co/Eurdem/Defne_llama3_2x8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000217
- train_batch_size: 4
- eval_batch_size: 4
- seed: 170
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
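The cosine scheduler listed above first ramps the learning rate linearly over the 100 warmup steps and then follows a half-cosine decay to zero over the remaining 400 steps. A stdlib sketch of the common formulation (the exact schedule implementation may differ slightly):

```python
import math

def cosine_lr(step, peak_lr=0.000217, warmup=100, total=500):
    """Linear warmup followed by cosine decay to zero."""
    if step < warmup:
        return peak_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(100))  # peak: 0.000217
print(cosine_lr(500))  # end of training: 0.0
```

The effective batch size follows from the table as well: micro_batch_size 4 × gradient_accumulation_steps 8 = total_train_batch_size 32.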
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0016 | 1 | 2.8473 |
| 0.9302 | 0.7796 | 500 | 0.6777 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/baichuan-inc-Baichuan-7B-bnb-4bit-smashed | PrunaAI | 2025-04-02T05:51:40Z | 0 | 0 | null | [
"safetensors",
"baichuan",
"pruna-ai",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-02T05:47:00Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo ORIGINAL_REPO_NAME are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/baichuan-inc-Baichuan-7B-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
xw17/SmolLM-1.7B-Instruct_finetuned_4_def_lora | xw17 | 2025-04-02T05:50:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-31T01:57:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
redsgnaoh/model52 | redsgnaoh | 2025-04-02T05:49:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:36:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jerryzh168/phi4-float8dq | jerryzh168 | 2025-04-02T05:48:37Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"phi3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
] | text-generation | 2025-04-02T05:45:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mc-mirella-e-dynho-video-vazado/VAZOU.VIDEOs.Mirella.e.Dynho.Alves.video.intimo.vazado.fdhyfg | mc-mirella-e-dynho-video-vazado | 2025-04-02T05:45:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-02T05:44:53Z |
Quem são MC Mirella e Dynho Alves, que tiveram vídeo vazado?
Cantora e dançarino têm carreiras marcadas por participações em programas de reality e relacionamento conturbado
Um vídeo da cantora MC Mirella e de seu marido, Dynho Alves, passou a circular nas redes sociais na última quinta-feira, 27, mostrando um momento íntimo do casal. O assunto viralizou nas redes sociais.
O casal Dynho Alves e MC Mirella
BBB 25: Mãe de Vitória Strada chama Mateus de traidor e diz que amizade dele com a filha acabou
Quem são Mirella e Dynho?
Para você
Now 26 years old, MC Mirella is a funk singer and dancer.
Born in São Caetano do Sul, São Paulo, she began her career in 2016, and her songs quickly took off on YouTube.
Her television breakthrough came through a segment on Programa Raul Gil in 2019. The following year, she was invited to join the cast of the reality show A Fazenda, a stint that made Mirella far better known nationally.
Mirella had been dating the dancer Dynho Alves since February 2017, but their relationship has been marked by controversy and repeated breakups and reconciliations.
Inside the Record network's confinement, for example, she clashed with Raissa Barbosa, whom she accused of having had an extramarital relationship with Alves. The two kept trading barbs for a few months after the show.
Dynho and Mirella married in February 2021, in a ceremony held in Cancún, Mexico. But the marriage ended in November of that same year, after he exchanged caresses with Sthe Matos during the following season of A Fazenda.
The two officially rekindled their relationship in 2023 and announced their first pregnancy in May of that year. Still together, they are the parents of Serena, aged 1 year and 3 months, and maintain profiles on an adult-content platform. |
hyunwoo612/bad_good_comment_v7 | hyunwoo612 | 2025-04-02T05:43:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:40:02Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hyunwoo612
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
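As a hedged starting point (the repo id comes from this card's header; treating the model as a standard Qwen2 instruct fine-tune with a chat template is an assumption), the checkpoint can be loaded with plain `transformers`. The heavy imports are deferred so the sketch is cheap to import:

```python
MODEL_ID = "hyunwoo612/bad_good_comment_v7"  # repo id from this card


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Run one chat turn against the fine-tuned model (loads weights on first call)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred: heavy deps

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    # Qwen2 instruct models expect chat-formatted input (assumption for this fine-tune).
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Loading the full model requires a GPU with enough memory for the 7B weights.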
|
MinaMila/llama_instbase_unlearned_Adult_3ep_22 | MinaMila | 2025-04-02T05:43:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:39:51Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sonyashijin/email_classification | sonyashijin | 2025-04-02T05:42:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:36:34Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sonyashijin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf | RichardErkhov | 2025-04-02T05:41:42Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T04:30:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
parser_user_v27f_epoch_7_lr_0.002 - GGUF
- Model creator: https://huggingface.co/magnifi/
- Original model: https://huggingface.co/magnifi/parser_user_v27f_epoch_7_lr_0.002/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [parser_user_v27f_epoch_7_lr_0.002.Q2_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q2_K.gguf) | Q2_K | 1.35GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ3_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ3_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K.gguf) | Q3_K | 1.75GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_0.gguf) | Q4_0 | 2.03GB |
| [parser_user_v27f_epoch_7_lr_0.002.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_K.gguf) | Q4_K | 2.16GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q4_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q4_1.gguf) | Q4_1 | 2.24GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_0.gguf) | Q5_0 | 2.46GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_K.gguf) | Q5_K | 2.53GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q5_1.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q5_1.gguf) | Q5_1 | 2.68GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q6_K.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q6_K.gguf) | Q6_K | 2.92GB |
| [parser_user_v27f_epoch_7_lr_0.002.Q8_0.gguf](https://huggingface.co/RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf/blob/main/parser_user_v27f_epoch_7_lr_0.002.Q8_0.gguf) | Q8_0 | 3.78GB |
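Any single row of the table above can be fetched with `huggingface_hub` rather than downloading by hand; a minimal sketch (the repo id and file-name pattern are taken from the links above):

```python
REPO_ID = "RichardErkhov/magnifi_-_parser_user_v27f_epoch_7_lr_0.002-gguf"


def quant_filename(quant: str) -> str:
    """Build the file name for one quantization row of the table above."""
    return f"parser_user_v27f_epoch_7_lr_0.002.{quant}.gguf"


def download_quant(quant: str = "Q4_K_M") -> str:
    """Fetch one quantized GGUF file; returns the local cache path."""
    from huggingface_hub import hf_hub_download  # third-party: pip install huggingface_hub

    return hf_hub_download(repo_id=REPO_ID, filename=quant_filename(quant))
```

The returned path can then be passed to a GGUF runtime such as llama.cpp.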
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vedant3907/llama-3.2-1B-PersonaClassifier-Adapters | Vedant3907 | 2025-04-02T05:36:52Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:mit",
"region:us"
] | null | 2024-12-24T12:18:52Z | ---
base_model: meta-llama/Llama-3.2-1B
library_name: peft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
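In the absence of official instructions, a minimal sketch of loading the adapter (the adapter and base-model ids are taken from this card's metadata; assuming a standard PEFT LoRA layout — note the base repo is gated and requires Hugging Face access):

```python
ADAPTER_ID = "Vedant3907/llama-3.2-1B-PersonaClassifier-Adapters"
BASE_ID = "meta-llama/Llama-3.2-1B"  # base model listed in the metadata block


def load_model():
    """Attach the LoRA adapter to the base model; returns (model, tokenizer)."""
    from peft import PeftModel  # deferred: heavy third-party deps
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    return model, tokenizer
```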
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Jonjew/AmySmart | Jonjew | 2025-04-02T05:36:20Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-04-02T05:35:35Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Breathtaking over the shoulder shot photography of ohwx looking at viewer,
imperfections, necklace, looking over shoulders, eyelashes, fine hair
detail, entire hairstyle visible, perfect eyes with iris pattern, sensual
lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus, focus on
background:1.5), 8k uhd, dslr, ultra high quality image, film grain,
Fujifilm XT3
parameters:
negative_prompt: AmySmart_flux_lora_v2_Weight-1.0
output:
url: images/AmySmart_flux_lora_v2_Weight-1.0_2024-12-24_2024-12-24-213547_0.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
license: unknown
---
# Amy Smart
<Gallery />
## Model description
FROM https://civitai.com/models/1069820/amy-smart-flux?modelVersionId=1200817
Please support the creator by liking and donating buzz at the page above
Trigger ohwx
Strength 1
👑 Amy Smart 🎬
About my celebrities loras
90% of the dataset used to build my loras uses only head images. That really helps them blend with other loras or models, since there are no hands or feet to interfere with the final image render. When you get distorted hands with a person lora, it's because there is hand information in the dataset used to train it; that will not happen with my loras.
I've trained on Flux.1 Dev, so other merged or trained checkpoints may not work well with my loras.
The drawback is that the body may not reflect reality. It may not be a drawback, though.
This is a lora for Flux.1 Dev. It works with other models, but you must drop some simple blocks (a good start is 19-32).
Trained with ai-toolkit, so merging it is not easy.
To get the best result
Guidance: 2.2-3
Steps (dev): 30-40
daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75
Resolution: Upscale the latent by 1.25 or 1.5 and you'll get awesome results. (Takes longer but is worth it.)
Trigger word (may work better in certain contexts): ohwx
Enjoy!
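The settings above can be sketched with `diffusers` (a hedged example: the pipeline call and LoRA loading follow the standard FLUX.1-dev workflow, and the guidance/steps values are picked from the ranges recommended in this card):

```python
LORA_ID = "Jonjew/AmySmart"
TRIGGER = "ohwx"  # trigger word from this card


def render(prompt: str):
    """Generate one image with the card's recommended settings (needs a large GPU)."""
    import torch  # deferred: heavy third-party deps
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(LORA_ID)
    pipe.to("cuda")
    return pipe(
        f"{TRIGGER}, {prompt}",
        guidance_scale=2.5,      # card recommends 2.2-3
        num_inference_steps=35,  # card recommends 30-40 for dev
    ).images[0]
```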
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AmySmart/tree/main) them in the Files & versions tab.
|
leeunzin/Qwen2.5-7B-etf2 | leeunzin | 2025-04-02T05:36:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T05:36:00Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leeunzin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Roadster-18/roberta | Roadster-18 | 2025-04-02T05:35:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-02T05:35:22Z | ---
license: apache-2.0
---
|
redsgnaoh/model51 | redsgnaoh | 2025-04-02T05:33:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:19:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_unlearned_Adult_2ep_22 | MinaMila | 2025-04-02T05:32:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:29:29Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Koutouzov1973/Llama_3.2_3B_Instruct_4bit_Distill_Situation_Generale_2_Avril | Koutouzov1973 | 2025-04-02T05:32:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"mlx",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:mlx-community/Llama-3.2-3B-Instruct",
"base_model:quantized:mlx-community/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-04-02T05:30:33Z | ---
base_model: mlx-community/Llama-3.2-3B-Instruct
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
- mlx
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system\n5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# mlx-community/Llama-3.2-3B-Instruct-4bit
The Model [mlx-community/Llama-3.2-3B-Instruct-4bit](https://huggingface.co/mlx-community/Llama-3.2-3B-Instruct-4bit) was
converted to MLX format from [mlx-community/Llama-3.2-3B-Instruct](https://huggingface.co/mlx-community/Llama-3.2-3B-Instruct)
using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
baicuya/fine_tuned_bge_small_zh_v1.5 | baicuya | 2025-04-02T05:32:16Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:20400",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:BAAI/bge-small-zh-v1.5",
"base_model:finetune:BAAI/bge-small-zh-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-02T05:32:10Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:20400
- loss:TripletLoss
base_model: BAAI/bge-small-zh-v1.5
widget:
- source_sentence: '1.一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,包括以下步骤:
步骤1:实时获取列车速度、列车位置、线路坡度以及供电网络中各节点的电压、电流、功率;
步骤2:对步骤1实时获取的数据进行数据清洗处理后,保存至数据库中,
步骤3:将数据库中存储的数据划分为训练集和测试集,然后训练神经网络模型,得到供电网络模型;
所述神经网络模型是DNN模型,将列车速度、列车位置、线路坡度以及供电网络中各节点的电压、电流、功率中的一部分作为神经网络模型的输入,另一部分作为神经网络模型的输出;
步骤4:利用供电网络模型进行供电潮流实时计算,得到供电系统的潮流状态以及车辆功率消耗。
2.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,列车速度和列车位置从电力监控系统获取;供电网络中各节点的电压、电流和功率从电力监控平台获取。
3.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,在步骤2中,若步骤1实时获取的数据存在异常或缺失,该部分缺失、异常的数据通过供电模型进行代数运算得出。
4.如权利要求3所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,所述供电模型采用步骤3中训练好的供电网络模型;在供电网络模型训练好之前,采用统计学模型多重插补暂作替代。
5.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,DNN模型共分三层:输入层、隐藏层和输出层,隐藏层的当前层的每个神经元都会接入前一层每个神经元的输入信号,在每个连接过程中,来自前一层的信号被乘以一个权重,增加一个偏置,然后通过一个非线性激活函数进行非线性转变,再利用损失函数计算预测值与真实值的误差,通过反向传播算法,将误差值反向传播至神经网络各层神经元,利用梯度下降算法对所有权重进行优化,并更新权值以最小化损失函数。
6.如权利要求1或5所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,非线性激活函数采用ReLU函数,损失函数采用均方误差函数。
7.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,步骤3中,采用10次分层交叉验证法验证神经网络模型,对神经网络模型进行评估。
8.如权利要求7所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,根据神经网络模型的评估结果,对神经网络模型进行参数调整:增加隐藏层层数、增加隐藏层中神经元个数或调整损失函数中的惩罚系数。'
sentences:
- 1.一种基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,包括以下步骤:步骤1,建立基本数据模块,包括线路数据模块、列车运行数据模块、地铁运营数据模块、优化参数模块;步骤2,建立列车牵引能耗计算模块,包括神经网络能耗拟合模块与时间-能耗曲线拟合模块;步骤3,使用神经网络能耗拟合模块,将线路数据和列车速度数据作为输入量,使用实测的能耗数据作为期望输出量,通过调节网络参数取值,使误差沿梯度方向下降,经过反复学习训练,确定与最小误差相对应的网络参数;步骤4,使用时间-能耗曲线拟合模块,用实测速度曲线和训练后的网络,对对应的能耗进行拟合,并获得时间与能耗的关系曲线;步骤5,使用列车区间运行时间优化模块,采用深度强化学习算法,综合考虑列车全线能耗、乘客旅行体验和运营管理要求,设计目标函数,通过调整各个区间的运行时间,最大化该目标函数的值;步骤3所述神经网络能耗拟合模块,使用的列车实测速度、牵引电流、辅变电流、制动电阻电流均为间隔为0.1s的离散的点,对于每个时刻,输入量为当前时刻及前后各10个时刻的速度值、列车当前位置的坡道参数、列车当前位置的弯道参数,期望输出量为列车在该时刻的功率,利用误差反向传播算法对网络的参数进行更新,具体步骤如下:(1)确定网络参数:包括网络层数、每层神经元个数、激活函数种类;(2)确定训练参数:包括参数的更新方法、更新步长、终止条件;(3)计算列车时间-位置曲线:根据实测的列车速度曲线,将速度对时间进行积分运算,得到列车的时间-位置曲线;(4)计算每个时刻列车所处位置的线路参数:根据列车时间-位置曲线,以0.1s为间隔,获得列车在每个时刻的位置,查表获得该位置的坡道参数和弯道参数;(5)计算每个时刻列车的功率:根据实测的网压u、牵引电流idr、辅变电流iaux,以0.1s为间隔,计算列车在每个时刻的功率p,计算方法如下:p=u(ndridr-nauxiaux)其中ndr为列车上的牵引变压器数量,naux为列车上的辅助变压器数量;(6)训练网络:以一个时刻前后各10个时刻的速度值、该时刻列车所在位置的坡度、该时刻列车所在位置的曲率半径、该时刻的功率作为一组数据,每次将多组数据作为一个小批量,将速度、坡度、曲率半径作为输入,将功率作为期望输出值,使用均方差作为损失函数,并进行误差的反向传播;不断训练,直至终止条件达成;步骤4所述时间-能耗曲线拟合模块,利用神经网络能耗拟合模块所训练出的网络参数,将不同的速度曲线作为网络的输入,计算对应的能耗值,将时间和能耗的关系绘制在二维坐标系上,得到时间与能耗的关系曲线,具体步骤如下:(1)计算列车时间-位置曲线:根据实测的列车速度曲线,将速度对时间进行积分运算,得到列车的时间-位置曲线;(2)计算每个时刻列车所处位置的线路参数:根据列车时间-位置曲线,以0.1s为间隔,获得列车在每个时刻的位置,查表获得该位置的坡道参数和弯道参数;(3)预测功率:以一个时刻前后各10个时刻的速度值、该时刻列车所在位置的坡度、该时刻列车所在位置的曲率半径、该时刻的功率作为一组数据,作为网络的输入,得到该时刻的功率预测值;(4)计算能耗:将列车在一个区间的功率预测值对时间积分,得到列车在这个区间的能耗;(5)绘制时间-能耗曲线:对一个区间的多条速度曲线进行以上(1)~(4)步骤的操作,每条速度曲线都对应一个运行时间和拟合能耗,将运行时间和拟合能耗的关系绘制在二维坐标系上,得到时间与能耗的关系曲线。2.根据权利要求1所述的基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,步骤1所述的基本数据模块包括线路数据模块、列车运行数据模块、地铁运营数据模块、优化参数模块,该四个模块均为数据输入模块,为列车牵引能耗计算模块和列车区间运行时间优化模块提供初始参数,其中:线路数据模块,分为车站数据、坡道数据、弯道数据;列车运行数据模块,提供列车运行时的实测数据,包括列车速度、牵引电流、辅变电流;地铁运营数据模块,提供列车每个运行区间的客流、列车原始的时刻表和换乘站数据;优化参数模块,用于神经网络能耗拟合的参数设置,包括神经网络层数、每层神经元个数、激活函数种类、迭代次数;还用于深度强化学习算法的参数设置,包括深度强化学习算法种类、神经网络层数、每层神经元个数、激活函数种类、迭代次数、奖励函数各个组成部分的比重及所选算法对应的超参数。3.根据权利要求1所述的基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,步骤2所述的建立列车牵引能耗计算模块,包括神经网络能耗拟合模块与时间-能耗曲线拟合模块,其中:神经网络能耗拟合模块:利用线路数据、列车实测速度、实测能耗对神经网络进行训练,更新网络参数,获得能耗拟合模型;时间-能耗曲线拟合模块:将更多的实测速度曲线作为训练后的神经网络的输入,
计算列车区间运行能耗,获得时间-能耗曲线。4.根据权利要求1所述的基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,步骤5所述列车区间运行时间优化模块,在时间-能耗曲线拟合模块求解的基础上,采用深度强化学习算法综合考虑列车全线能耗、乘客旅行体验和运营管理要求,设计目标函数,通过调整各个区间的运行时间,最大化该目标函数的值,具体步骤如下:(1)选择算法:选择深度强化学习中基于策略的方法中的一种,包括策略梯度VPG、优势行动器-评判器A2C、近端策略优化PPO;(2)建立网络:使用的神经网络有两个,一个为行动网络,用来确定一个状态下,应该增加运行时间的区间和应该减少运行时间的区间;另一个为评判网络,用来估算一个状态的价值;VPG仅使用了行动网络,而A2C和PPO使用了行动网络和评判网络;如果轨道交通线路全线区间数为n,则行动网络的输入神经元数量为n,输出分为增分支和减分支,每个分支的输出神经元数量为n+1;评判网络的输入神经元数量为n,输出神经元数量为1;行动网络的两个输出分支之前均使用softmax函数,使得输出神经元输出值之和为1;其余部分每两层之间使用ReLU作为激活函数;(3)初始化网络参数和训练相关参数:初始化行动网络的参数,包括公共部分的参数θ、确定增加运行时间的区间的分支参数αinc和确定减少运行时间的区间的分支参数αdec;初始化评判网络的参数φ;初始化经验重放区R;初始化迭代回合数;(4)开始一个回合:将长度为n的零向量作为初始状态st,输入给行动网络;向量中的每个值代表一个区间的运行时间相对原时刻表的变化量,单位为秒;初始状态为零向量,即列车按原时刻表运行;(5)网络前向传播:根据网络的参数,计算行动网络两个分支的输出;以每个输出神经元的输出量作为概率,从增分支和减分支各选择一个神经元,分别代表应该增加运行时间的区间编号应该减少运行时间的区间编号以输入的向量为基准,对这两个区间的运行时间分别增加和减少1秒,获得新的状态st+1;如果任意一个分支选择了多余的一个神经元,则当前回合结束;(6)计算奖励:对于一个状态,目标函数为其中Ei代表列车在第i个区间的能耗,k1Ei代表能耗对目标函数的影响,能耗越低,奖励函数越大;pi代表第i个区间客流量的归一化值,δtri代表第i个区间的运行时间相对于原时刻表的调整量,k2piδtri表示乘客旅行体验对目标函数的影响,该项与乘客平均旅行时间变化量正相关,乘客平均旅行时间越短,该项越小,奖励函数越大;Fi是一个标志位,如果编号为i的站是换乘站或终点站,它的值为1,否则为0,该项代表运营管理对目标函数的影响,此处考虑换乘站的影响,列车到达换乘站的准时程度影响地铁公司的运营管理压力,列车到达换乘站越准时,该项越小,目标函数越大;目标函数三项前面的系数k1、k2、k3,表示三项在目标函数中的权重,根据实际情况设定,权重越大,该项对目标函数的影响就越大;函数的自变量δtr是由所有区间的运行时间调整量组成的一个向量,式子中的能耗数据E和客流数据p均经过了标准化处理;奖励值r为前后两个状态下,目标函数之差,即r=f(st+1)-f(st)检查新的状态st+1下目标函数值是否为历史最高值,如果是则将新状态和对应的目标函数值保存;(7)保存状态转换情况:将保存到经验重放区R内;(8)循环迭代:将更新后的状态st+1作为网络的输入,不断重复步骤(5)~(7),直至一个回合结束;(9)更新网络参数:使用步骤(1)中选择的深度强化学习算法更新网络参数,更新完成后,清空经验重放区;(10)开始下一回合:循环执行(4)~(9),直到达到终止回合数,结束训练;(11)输出结果:输出最高的目标函数值及其对应的状态。
- '{1.一种新型铝合金,其特征在于:其组成按重量百分比为,10.2-10.5%的硅、2.3-2.4%的铜、0.8-1.0%的镁、3.1-3.3%的镍、0.12-0.15%的钛、0.15-0.18%的钨、0.03-0.05%的硼、0.01-0.03%的锰、0.01-0.03%的钼、0.03-0.05%的锡,0.05-0.1%的锶,0.1-0.3%的锌、0.1-0.2%的钒、0.01-0.015%的锆,余量为铝。,2.根据权利要求1所述的新型铝合金,其特征在于:所述钛、钨、钒、锆分别以钛铝合金、钨铝合金、钒铝合金及锆铝合金的方式加入。}'
- '1.一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,包括能量管理平台、由多个海上油田平台构建的海上油田群组以及能量调用模块,所述能量管理平台与所述海上油田群组电性连接,所述能量管理平台还与所述能量调用模块电性连接,其中:
所述海上油田群组,用于采集海上油田平台的运行参数,将该运行参数进行处理后发送给能量管理平台;
所述能量管理平台,用于接收海上油田群组发送的运行参数,进行能量匹配建模,完成能量匹配任务,并将能量调用指令发送给能量调用模块;
所述能量调用模块,用于接收能量调用指令,向海上油田群组分配能量调用任务。
2.根据权利要求1所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述海上油田平台包括信号采集端、信号转化端、信号存储端和信号发送端,所述信号采集端与所述信号转化端电性连接,所述信号转化端和所述信号存储端电性连接,所述信号存储端与所述信号发送端电性连接。
3.根据权利要求2所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述信号采集端用于采集海上油田平台的运行参数,并将采集得到的运行参数发送给信号转化端,所述信号转化端接收所述信号采集端发送的运行参数,将该运行参数转化为数字信号后发送给信号存储端,信号存储端接收该数字信号并发送给信号发送端。
4.根据权利要求1所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量管理平台包括信号接收端、能量匹配端和电网信息获取端、指令传输端和反馈端,所述信息接收端和电网信息获取端均与能量匹配端电性连接,所述能量匹配端和所述指令传输端电性连接。
5.根据权利要求4所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量管理平台还包括有反馈端,所述反馈端与所述指令传输端电性连接。
6.根据权利要求1所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量调用模块包括有能量调度模块、指令接收模块和指令发送模块,所述能量调度模块分别与指令接收模块、指令发送模块电性连接。
7.根据权利要求6所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量调用模块还包括有互联网云端,所述互联网云端与所述能量调度模块电性连接。
8.根据权利要求1-7任一所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,还包括有储能模块,所述储能模块与所述能量管理平台电性连接,储能模块用于存储能量互联后剩余能量的存储。
9.根据权利要求8所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述储能模块包括有指令接收端和匹配单元,所述指令接收端与所述匹配单元电性连接,所述指令接收端用于接收能量调用模块发送的能量调用指令,并将该能量调用指令发送给匹配单元,由匹配单元进行能量匹配存储。
10.根据权利要求9所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述储能模块还包括有统计单元和显示单元,所述统计单元与所述显示单元电性连接。'
- source_sentence: '1.一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,包括以下步骤:
步骤1:实时获取列车速度、列车位置、线路坡度以及供电网络中各节点的电压、电流、功率;
步骤2:对步骤1实时获取的数据进行数据清洗处理后,保存至数据库中,
步骤3:将数据库中存储的数据划分为训练集和测试集,然后训练神经网络模型,得到供电网络模型;
所述神经网络模型是DNN模型,将列车速度、列车位置、线路坡度以及供电网络中各节点的电压、电流、功率中的一部分作为神经网络模型的输入,另一部分作为神经网络模型的输出;
步骤4:利用供电网络模型进行供电潮流实时计算,得到供电系统的潮流状态以及车辆功率消耗。
2.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,列车速度和列车位置从电力监控系统获取;供电网络中各节点的电压、电流和功率从电力监控平台获取。
3.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,在步骤2中,若步骤1实时获取的数据存在异常或缺失,该部分缺失、异常的数据通过供电模型进行代数运算得出。
4.如权利要求3所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,所述供电模型采用步骤3中训练好的供电网络模型;在供电网络模型训练好之前,采用统计学模型多重插补暂作替代。
5.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,DNN模型共分三层:输入层、隐藏层和输出层,隐藏层的当前层的每个神经元都会接入前一层每个神经元的输入信号,在每个连接过程中,来自前一层的信号被乘以一个权重,增加一个偏置,然后通过一个非线性激活函数进行非线性转变,再利用损失函数计算预测值与真实值的误差,通过反向传播算法,将误差值反向传播至神经网络各层神经元,利用梯度下降算法对所有权重进行优化,并更新权值以最小化损失函数。
6.如权利要求1或5所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,非线性激活函数采用ReLU函数,损失函数采用均方误差函数。
7.如权利要求1所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,步骤3中,采用10次分层交叉验证法验证神经网络模型,对神经网络模型进行评估。
8.如权利要求7所述的一种基于轨道交通电力监控平台的供电潮流实时计算方法,其特征在于,根据神经网络模型的评估结果,对神经网络模型进行参数调整:增加隐藏层层数、增加隐藏层中神经元个数或调整损失函数中的惩罚系数。'
sentences:
- 1.一种供电自动切换方法,用于轨道交通中压环网供电系统,其特征在于,所述供电自动切换方法包括:获取中压环网供电系统中每个供电区的供电检测数据,其中,所述供电检测数据包括外供电进线电压、外供电进线断路器状态、外供电进线断路器电流、变电所母线电压、环网断路器状态、环网断路器电流、环网断路器对应的跳闸信号;根据所述供电检测数据识别故障供电区和故障类型,并根据故障类型确定采用的切换控制模式,其中,所述切换控制模式包括外供电故障切换模式和单环网内故障切换模式;根据所述切换控制模式对所述故障供电区的供电进行切换;其中,所述根据所述供电检测数据识别故障供电区和故障类型,并根据故障类型确定采用的切换控制模式,包括:若确定所述外供电进线电压小于进线电压阈值、所述外供电进线断路器处于合闸状态、所述外供电进线断路器电流小于电流阈值、所述变电所母线电压小于母线电压阈值,则对应供电区为所述故障供电区,并确定为外部供电故障,采用所述外供电故障切换模式;若确定所述环网断路器处于分闸状态、所述环网断路器电流小于电流阈值、检测到所述环网断路器对应的跳闸信号,则对应供电区为故障供电区,并确定为环网内部故障,采用所述单环网内故障切换模式。2.根据权利要求1所述的供电自动切换方法,其特征在于,所述根据所述切换控制模式对所述故障供电区的供电进行切换,包括:判断所述故障供电区的相邻供电区的外供电进线电压是否大于所述进线电压阈值,且所述相邻供电区的外供电进线断路器处于合闸状态;如果是,发送控制所述故障供电区的进线断路器的分闸控制信号,并发送断开所述故障供电区的均分开关的控制信号,以及,发送关闭所述故障供电区与所述相邻供电区之间的联络开关的控制信号。3.根据权利要求1所述的供电自动切换方法,其特征在于,所述根据所述切换控制模式对所述故障供电区的供电进行切换,包括:确定所述故障供电区中的子故障供电区,其中,所述故障供电区中变电所母线电压小于母线电压阈值的环网区域为所述子故障供电区;判断所述子故障供电区的相邻供电区的外供电进线电压是否大于进线电压阈值,且所述子故障供电区的相邻供电区的外供电进线断路器处于合闸状态;如果是,发送关闭所述子故障供电区与所述相邻供电区之间的联络开关的控制信号。4.根据权利要求1-3任一项所述的供电自动切换方法,其特征在于,所述供电自动切换方法还包括:接收切换反馈信号,并根据所述切换反馈信号进行切换状态提示。5.根据权利要求1-3任一项所述的供电自动切换方法,其特征在于,所述供电自动切换方法还包括:发送通讯故障检测信号,并接收通讯反馈信号,根据所述通讯反馈信号判断通讯是否故障。6.根据权利要求1所述的供电自动切换方法,其特征在于,所述供电检测数据包括外供电进线电压、变电所母线电压、断路器跳闸信号,所述供电自动切换方法还包括:确定同一供电区的所述外供电进线电压大于进线电压阈值、且所述变电所母线电压小于母线电压阈值、且未检测到所述断路器跳闸信号,识别为人工分闸操作。7.一种监控系统,其特征在于,包括:通讯装置,用于接收轨道交通中压环网供电系统每个供电区的供电检测数据;轨道交通电力监控系统,包括切换控制模块,所述切换控制模块用于执行如权利要求1-6任一项所述的供电自动切换方法。8.一种轨道交通中压环网系统,其特征在于,包括:供电系统,包括多个供电区,外供电引线通过进线断路器分别为每个所述供电区供电,相邻供电区之间设置联络开关,每个供电区包括多个变电所,每个变电所通过环网断路器连接;智能电子设备,用于检测所述供电系统中每个所述供电区的供电检测数据;如权利要求7所述的监控系统,所述监控系统与所述智能电子设备通过监控数据网络进行通讯。
- '{1.一种基于盐酸再生循环的废旧三元电池正极材料的资源化回收方法,其特征在于,所述方法包括以下步骤:,(1)将废旧三元电池正极材料进行活化,之后筛分,得到集流体和三元材料粉末;,(2)将步骤(1)中得到的三元材料粉末进行盐酸酸浸,固液分离,得到第一氯化物溶液;之后经脱铜、脱硅,固液分离,得到第二氯化物溶液;,(3)在步骤(2)中得到的第二氯化物溶液中加入氯化镍、氯化钴和氯化锰,得到第三氯化物溶液,所述氯化镍、氯化钴和氯化锰的加入量使得所得第三氯化物溶液中镍、钴和锰的摩尔比满足镍钴锰三元前驱体材料的要求;之后将所述第三氯化物溶液进行热分解,得到氧化镍、氧化钴和氧化锰的混合氧化物和HCl,所述HCl经吸收后得到盐酸,循环至步骤(2)中与补充的盐酸混合用于盐酸酸浸;,(4)将步骤(3)中所述氧化镍、氧化钴和氧化锰的混合氧化物进行水浸,固液分离、得到氯化锂溶液和氧化物滤饼,所述氧化物滤饼经煅烧,得到三元前驱体氧化物。,2.如权利要求1所述的方法,其特征在于,步骤(1)中所述活化的温度500-600℃;,优选地,步骤(1)所述活化的时间为60-90min。,3.如权利要求1或2所述的方法,其特征在于,步骤(2)中进行盐酸酸浸之前还包括将所述三元材料粉末进行粉碎;,优选地,所述粉碎的终点至颗粒目数≥200目;,优选地,步骤(2)中所述盐酸酸浸采用的盐酸的质量浓度为18-21%;,优选地,步骤(2)中所述盐酸酸浸的温度为75-85℃;,优选地,步骤(2)中所述盐酸酸浸用于将三元材料粉末中的镍、钴和锰转化为氯化镍、氯化钴和氯化锰,所述盐酸酸浸过程中盐酸的用量过量10-20%;,优选地,步骤(2)所述盐酸酸浸的过程中采用低压蒸汽加热;,优选地,所述低压蒸汽的温度为140-150℃,压力为0.4-0.5MPa;,优选地,步骤(2)所述脱铜的方法为还原脱铜;,优选地,步骤(2)所述脱铜的方法包括在第一氯化物溶液中加入铁粉;,优选地,所述铁粉的加入量使得溶液的pH为1-1.6;,优选地,步骤(2)所述脱硅为沉淀脱硅;,优选地,步骤(2)所述脱硅的方法包括在脱铜的溶液中加入氨水;,优选地,所述氨水的加入量使得溶液的pH为3-4。,4.如权利要求1-3任一项所述的方法,其特征在于,步骤(3)所述镍钴锰三元前驱体材料包括333型、523型或811型中的任意一种;,优选地,步骤(3)所述热分解的温度为450-550℃;,优选地,步骤(3)所述HCl经吸收前还包括除尘和降温。,5.如权利要求1-4任一项所述的方法,其特征在于,步骤(4)所述水浸的温度为80-95℃;,优选地,步骤(4)所述水浸过程中水的质量与所述氧化镍、氧化钴和氧化锰的混合氧化物的质量之比为(7-12):1;,优选地,步骤(4)所述水浸采用的加热介质为低压蒸汽,所述低压蒸汽的温度为140-150℃,压力为0.4-0.5MPa;,优选地,步骤(4)所述氯化锂溶液的质量浓度为10-15%;,优选地,步骤(4)所述煅烧的温度为500-600℃;,优选地,步骤(4)所述煅烧的时间为60-90min。,6.如权利要求1-5任一项所述的方法,其特征在于,所述方法还包括在步骤(4)中的氯化锂溶液中加入可溶性硫化物进行除杂,之后固液分离,加入碳酸钠溶液,固液分离,干燥,得到碳酸锂;,优选地,所述可溶性硫化物包括硫化钠和/或硫化铵;,优选地,所述可溶性硫化物的加入量使得氯化锂溶液中的镍、钴和锰完全沉淀;,优选地,所述可溶性硫化物的摩尔量与所述氯化锂溶液中的镍、钴和锰的摩尔量之和的比值为(1.05-1.2):1;,优选地,所述碳酸钠溶液的质量浓度为20-25%;,优选地,所述碳酸锂的干燥温度为150-180℃。,7.如权利要求1-6任一项所述的方法,其特征在于,所述方法包括以下步骤:,(1)将废旧三元电池正极材料在500-600℃下进行活化,之后筛分,得到集流体和三元材料粉末;,(2)将步骤(1)中得到的三元材料粉末进行粉碎至目数≥200目,之后在质量浓度为18-21%的盐酸溶液中进行盐酸酸浸,盐酸酸浸的温度为75-85℃,固液分离,得到第一氯化物溶液;在所述第一氯化物溶液中加入铁粉调节pH为1-1.6,加入氨水调节pH至3-4,固液分离,得到第二氯化物溶液;,(3)在步骤(2)中得到的第二氯化物溶液中加入氯化镍、氯化钴和氯化锰,得到第三氯化物溶液,所述氯化镍、氯化钴和氯化锰的加入量使得所得第三氯化物溶液中镍、钴和锰的摩尔比满足镍钴锰三元前驱体材料的要求;之后将所述第三氯化物溶液在450-550℃条件下进行热分解,得到氧化镍、氧化钴和氧化锰的混合氧化物和HCl,所述HCl经吸收后得到质量浓度为18-21%的盐酸,循环至步骤(2)中与补充的盐酸混合用于盐酸酸浸;,(4)将步骤(3)中所述氧化镍、氧化钴和氧化锰
的混合氧化物进行水浸,所述水浸的过程中水的质量与混合氧化物的质量之比为(7-12):1,所述水浸的温度为80-95℃,固液分离、得到质量浓度为10-15%的氯化锂溶液和氧化物滤饼,所述氧化物滤饼在500-600℃下进行煅烧60-90min,得到三元前驱体氧化物;,(5)在步骤(4)中得到的氯化锂溶液中加入硫化钠,所述硫化钠的摩尔量与所述氯化锂溶液中镍、钴和锰的摩尔量之和的比值为(1.05-1.2):1,固液分离,之后在滤液中加入质量浓度为20-25%的碳酸钠溶液,固液分离,在150-180℃下干燥,得到碳酸锂。,8.一种基于盐酸再生循环的废旧三元电池正极材料的资源化回收系统,其特征在于,所述资源化回收系统包括高温炉、筛分机、球磨机、盐酸酸浸釜、酸浸渣压滤机、溶液调节槽、净化压滤机、氯化物配制槽、三元热解炉、旋风分离器、预浓缩器、盐酸吸收塔、水浸出釜、氧化物压滤机和三元煅烧炉,所述高温炉的出口连接所述筛分机的入口,所述筛分机的出口连接所述球磨机的入口,所述球磨机的出口连接所述盐酸酸浸釜的入口,所述盐酸酸浸釜的出口连接所述酸浸渣压滤机的入口,所述酸浸渣压滤机的液体出口连接溶液调节槽的入口,所述溶液调节槽的出口连接净化压滤机的入口,所述净化压滤机的液体出口连接所述氯化物配制槽的入口,所述氯化物配制槽的出口连接所述预浓缩器的液体入口,所述预浓缩器的液体出口连接所述三元热解炉的入口,所述三元热解炉的气体出口连接旋风分离器的入口,所述旋风分离器的气体出口连接所述预浓缩器的气体入口,所述预浓缩器的气体出口连接所述盐酸吸收塔的气体入口,所述盐酸吸收塔的液体出口连接盐酸酸浸釜的液体入口,所述三元热解炉的固体出口连接所述水浸出釜的入口,所述水浸出釜的出口连接所述氧化物压滤机的入口,所述氧化物压滤机的固体出口连接所述三元煅烧炉的入口。,9.如权利要求8所述的资源化回收系统,其特征在于,所述球磨机的出口和所述盐酸酸浸釜的入口之间设置有螺旋输送机,所述螺旋输送机的入口和出口分别连接所述球磨机的出口和所述盐酸酸浸釜的入口;,优选地,所述盐酸酸浸釜的出口与所述酸浸渣压滤机的入口之间设置有酸浸釜出料泵,所述酸浸釜出料泵的入口和出口分别连接所述盐酸酸浸釜的出口和所述酸浸渣压滤机的入口;,优选地,所述溶液调节槽上设置有铁粉加入口和氨水加入口;,优选地,所述溶液调节槽的出口和所述净化压滤机的入口之间设置有调节槽出料泵,所述调节槽出料泵的入口和出口分别连接所述溶液调节槽的出口和所述净化压滤机的入口;,优选地,所述系统还包括氯化物溶解槽,所述氯化物溶解槽的出口连接所述氯化物配制槽的入口;,优选地,所述氯化物溶解槽的出口和所述氯化物配制槽的入口之间设置有精密过滤机,所述精密过滤机的入口和出口分别连接所述氯化物溶解槽的出口和所述氯化物配制槽的入口;,优选地,所述氯化物溶解槽的出口和所述精密过滤机的入口之间设置有溶解槽出料泵,所述溶解槽出料泵的入口和出口分别连接所述氯化物溶解槽的出口和所述精密过滤机的入口;,优选地,所述氯化物配制槽的出口和所述预浓缩器的入口之间设置有氯化物配制泵,所述氯化物配制泵的入口和出口分别连接所述氯化物配制槽的出口和所述预浓缩器的入口;,优选地,所述预浓缩器的液体出口和所述三元热解炉的入口之间设置有预浓缩器循环泵,所述预浓缩器的液体出口连接所述预浓缩器循环泵的入口,所述预浓缩器循环泵的出口连接所述三元热解炉的入口和所述预浓缩器的顶部入口;,优选地,所述预浓缩器循环泵的出口和所述三元热解炉的入口之间设置有喷雾热解泵,所述喷雾热解泵的入口和出口分别连接所述预浓缩器循环泵的出口和所述三元热解炉的入口;,优选地,所述盐酸吸收塔的液体出口和所述盐酸酸浸釜的液体入口之间设置有盐酸泵,所述盐酸泵的入口和出口分别连接所述盐酸吸收塔的液体出口和所述盐酸酸浸釜的液体入口;,优选地,所述三元热解炉的固体出口和所述水浸出釜的入口之间设置有粉末收集器;所述粉末收集器的入口和出口分别连接所述三元热解炉的固体出口和所述水浸出釜的入口;,优选地,所述系统还包括尾气净化塔,所述盐酸吸收塔的气体出口连接所述尾气净化塔的气体入口;,优选地,所述盐酸吸收塔的气体出口和所述尾气净化塔的气体入口之间设置有耐酸尾气风机,所述耐酸尾气风机的入口和出口分别连接所述盐酸吸收塔的气体出口和所述尾气净化塔的气体入口;,优选地,所述尾气净化塔的液体出口设置有净化塔循环泵,所述净化塔循环泵的入口连接所述尾气净化塔的液体出口,所述净化塔循环泵的出口连接所述尾气净化塔的液体入口和所述盐酸吸收塔的液体入口;,优选地,所述系统还包括氯化锂净化釜、硫化物过滤机、碳酸锂合成釜、碳酸锂过滤机和碳酸锂干燥机,所述氯化锂净化釜的入口连接所述氧化物压滤机的液体出口,所述氯化锂净化釜的出口连接所述硫化物过滤机的入口,所述硫化物过滤
机的液体出口连接所述碳酸锂合成釜的入口,所述碳酸锂合成釜的出口连接所述碳酸锂过滤机的入口,所述碳酸锂过滤机的固体出口连接所述碳酸锂干燥机的入口;,优选地,所述氯化锂净化釜的出口和所述硫化物过滤机的入口之间设置有氯化锂净化泵,所述氯化锂净化泵的入口和出口分别连接所述氯化锂净化釜的出口和所述硫化物过滤机的入口;,优选地,所述碳酸锂合成釜的出口和所述碳酸锂过滤机的入口之间设置有合成釜出料泵,所述合成釜出料泵的入口和出口分别连接所述碳酸锂合成釜的出口和所述碳酸锂过滤机的入口。,10.如权利要求8或9所述的资源化回收系统,其特征在于,所述高温炉为电加热或天然气加热设备;,优选地,所述高温炉的炉型为厢式炉或回转炉;,优选地,所述盐酸酸浸釜的材质为耐盐酸材质;,优选地,所述盐酸酸浸釜的内衬的材质为搪玻璃或石墨内衬;,优选地,所述盐酸酸浸釜带有夹套;,优选地,所述盐酸酸浸釜以低压蒸汽为热源;,优选地,所述三元热解炉的炉型为箱式炉或回转炉;,优选地,所述三元热解炉采用底部加热;,优选地,所述三元热解炉采用底部出料方式;,优选地,所述三元热解炉为直接加热设备;,优选地,所述三元热解炉由耐酸耐火材料砌筑形成;,优选地,所述三元热解炉为电加热或天然气加热设备;,优选地,所述三元热解炉以天然气为燃料;,优选地,所述旋风分离器的固体出口连接所述三元热解炉的中部,用于将固体粉料输入热解炉中;,优选地,所述水浸出釜用于将氯化锂由氧化镍、氧化钴和氧化锰的混合氧化物中浸出;,优选地,所述水浸出釜的材质为耐氯离子合金;,优选地,所述水浸出釜为夹套结构,以低压蒸汽为热源;,优选地,所述硫化物过滤机为管式或篮式压滤机。}'
- 1.一种基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,包括以下步骤:步骤1,建立基本数据模块,包括线路数据模块、列车运行数据模块、地铁运营数据模块、优化参数模块;步骤2,建立列车牵引能耗计算模块,包括神经网络能耗拟合模块与时间-能耗曲线拟合模块;步骤3,使用神经网络能耗拟合模块,将线路数据和列车速度数据作为输入量,使用实测的能耗数据作为期望输出量,通过调节网络参数取值,使误差沿梯度方向下降,经过反复学习训练,确定与最小误差相对应的网络参数;步骤4,使用时间-能耗曲线拟合模块,用实测速度曲线和训练后的网络,对对应的能耗进行拟合,并获得时间与能耗的关系曲线;步骤5,使用列车区间运行时间优化模块,采用深度强化学习算法,综合考虑列车全线能耗、乘客旅行体验和运营管理要求,设计目标函数,通过调整各个区间的运行时间,最大化该目标函数的值;步骤3所述神经网络能耗拟合模块,使用的列车实测速度、牵引电流、辅变电流、制动电阻电流均为间隔为0.1s的离散的点,对于每个时刻,输入量为当前时刻及前后各10个时刻的速度值、列车当前位置的坡道参数、列车当前位置的弯道参数,期望输出量为列车在该时刻的功率,利用误差反向传播算法对网络的参数进行更新,具体步骤如下:(1)确定网络参数:包括网络层数、每层神经元个数、激活函数种类;(2)确定训练参数:包括参数的更新方法、更新步长、终止条件;(3)计算列车时间-位置曲线:根据实测的列车速度曲线,将速度对时间进行积分运算,得到列车的时间-位置曲线;(4)计算每个时刻列车所处位置的线路参数:根据列车时间-位置曲线,以0.1s为间隔,获得列车在每个时刻的位置,查表获得该位置的坡道参数和弯道参数;(5)计算每个时刻列车的功率:根据实测的网压u、牵引电流idr、辅变电流iaux,以0.1s为间隔,计算列车在每个时刻的功率p,计算方法如下:p=u(ndridr-nauxiaux)其中ndr为列车上的牵引变压器数量,naux为列车上的辅助变压器数量;(6)训练网络:以一个时刻前后各10个时刻的速度值、该时刻列车所在位置的坡度、该时刻列车所在位置的曲率半径、该时刻的功率作为一组数据,每次将多组数据作为一个小批量,将速度、坡度、曲率半径作为输入,将功率作为期望输出值,使用均方差作为损失函数,并进行误差的反向传播;不断训练,直至终止条件达成;步骤4所述时间-能耗曲线拟合模块,利用神经网络能耗拟合模块所训练出的网络参数,将不同的速度曲线作为网络的输入,计算对应的能耗值,将时间和能耗的关系绘制在二维坐标系上,得到时间与能耗的关系曲线,具体步骤如下:(1)计算列车时间-位置曲线:根据实测的列车速度曲线,将速度对时间进行积分运算,得到列车的时间-位置曲线;(2)计算每个时刻列车所处位置的线路参数:根据列车时间-位置曲线,以0.1s为间隔,获得列车在每个时刻的位置,查表获得该位置的坡道参数和弯道参数;(3)预测功率:以一个时刻前后各10个时刻的速度值、该时刻列车所在位置的坡度、该时刻列车所在位置的曲率半径、该时刻的功率作为一组数据,作为网络的输入,得到该时刻的功率预测值;(4)计算能耗:将列车在一个区间的功率预测值对时间积分,得到列车在这个区间的能耗;(5)绘制时间-能耗曲线:对一个区间的多条速度曲线进行以上(1)~(4)步骤的操作,每条速度曲线都对应一个运行时间和拟合能耗,将运行时间和拟合能耗的关系绘制在二维坐标系上,得到时间与能耗的关系曲线。2.根据权利要求1所述的基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,步骤1所述的基本数据模块包括线路数据模块、列车运行数据模块、地铁运营数据模块、优化参数模块,该四个模块均为数据输入模块,为列车牵引能耗计算模块和列车区间运行时间优化模块提供初始参数,其中:线路数据模块,分为车站数据、坡道数据、弯道数据;列车运行数据模块,提供列车运行时的实测数据,包括列车速度、牵引电流、辅变电流;地铁运营数据模块,提供列车每个运行区间的客流、列车原始的时刻表和换乘站数据;优化参数模块,用于神经网络能耗拟合的参数设置,包括神经网络层数、每层神经元个数、激活函数种类、迭代次数;还用于深度强化学习算法的参数设置,包括深度强化学习算法种类、神经网络层数、每层神经元个数、激活函数种类、迭代次数、奖励函数各个组成部分的比重及所选算法对应的超参数。3.根据权利要求1所述的基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,步骤2所述的建立列车牵引能耗计算模块,包括神经网络能耗拟合模块与时间-能耗曲线拟合模块,其中:神经网络能耗拟合模块:利用线路数据、列车实测速度、实测能耗对神经网络进行训练,更新网络参数,获得能耗拟合模型;时间-能耗曲线拟合模块:将更多的实测速度曲线作为训练后的神经网络的输入,
计算列车区间运行能耗,获得时间-能耗曲线。4.根据权利要求1所述的基于深度强化学习的城市轨道交通列车时刻表优化方法,其特征在于,步骤5所述列车区间运行时间优化模块,在时间-能耗曲线拟合模块求解的基础上,采用深度强化学习算法综合考虑列车全线能耗、乘客旅行体验和运营管理要求,设计目标函数,通过调整各个区间的运行时间,最大化该目标函数的值,具体步骤如下:(1)选择算法:选择深度强化学习中基于策略的方法中的一种,包括策略梯度VPG、优势行动器-评判器A2C、近端策略优化PPO;(2)建立网络:使用的神经网络有两个,一个为行动网络,用来确定一个状态下,应该增加运行时间的区间和应该减少运行时间的区间;另一个为评判网络,用来估算一个状态的价值;VPG仅使用了行动网络,而A2C和PPO使用了行动网络和评判网络;如果轨道交通线路全线区间数为n,则行动网络的输入神经元数量为n,输出分为增分支和减分支,每个分支的输出神经元数量为n+1;评判网络的输入神经元数量为n,输出神经元数量为1;行动网络的两个输出分支之前均使用softmax函数,使得输出神经元输出值之和为1;其余部分每两层之间使用ReLU作为激活函数;(3)初始化网络参数和训练相关参数:初始化行动网络的参数,包括公共部分的参数θ、确定增加运行时间的区间的分支参数αinc和确定减少运行时间的区间的分支参数αdec;初始化评判网络的参数φ;初始化经验重放区R;初始化迭代回合数;(4)开始一个回合:将长度为n的零向量作为初始状态st,输入给行动网络;向量中的每个值代表一个区间的运行时间相对原时刻表的变化量,单位为秒;初始状态为零向量,即列车按原时刻表运行;(5)网络前向传播:根据网络的参数,计算行动网络两个分支的输出;以每个输出神经元的输出量作为概率,从增分支和减分支各选择一个神经元,分别代表应该增加运行时间的区间编号应该减少运行时间的区间编号以输入的向量为基准,对这两个区间的运行时间分别增加和减少1秒,获得新的状态st+1;如果任意一个分支选择了多余的一个神经元,则当前回合结束;(6)计算奖励:对于一个状态,目标函数为其中Ei代表列车在第i个区间的能耗,k1Ei代表能耗对目标函数的影响,能耗越低,奖励函数越大;pi代表第i个区间客流量的归一化值,δtri代表第i个区间的运行时间相对于原时刻表的调整量,k2piδtri表示乘客旅行体验对目标函数的影响,该项与乘客平均旅行时间变化量正相关,乘客平均旅行时间越短,该项越小,奖励函数越大;Fi是一个标志位,如果编号为i的站是换乘站或终点站,它的值为1,否则为0,该项代表运营管理对目标函数的影响,此处考虑换乘站的影响,列车到达换乘站的准时程度影响地铁公司的运营管理压力,列车到达换乘站越准时,该项越小,目标函数越大;目标函数三项前面的系数k1、k2、k3,表示三项在目标函数中的权重,根据实际情况设定,权重越大,该项对目标函数的影响就越大;函数的自变量δtr是由所有区间的运行时间调整量组成的一个向量,式子中的能耗数据E和客流数据p均经过了标准化处理;奖励值r为前后两个状态下,目标函数之差,即r=f(st+1)-f(st)检查新的状态st+1下目标函数值是否为历史最高值,如果是则将新状态和对应的目标函数值保存;(7)保存状态转换情况:将保存到经验重放区R内;(8)循环迭代:将更新后的状态st+1作为网络的输入,不断重复步骤(5)~(7),直至一个回合结束;(9)更新网络参数:使用步骤(1)中选择的深度强化学习算法更新网络参数,更新完成后,清空经验重放区;(10)开始下一回合:循环执行(4)~(9),直到达到终止回合数,结束训练;(11)输出结果:输出最高的目标函数值及其对应的状态。
- source_sentence: '1.一种智能供电系统,其特征在于,包括用户终端、区域分布模块、预警分级模块、运行监测模块、数据采集模块、环境监测模块、供电管控模块以及服务器,所述区域分布模块用于将供电区域进行划分,将供电区域划分为若干个监测区域,并将监测区域标记为u,u=1,2,……,z,z为正整数,所述区域分布模块将划分后的监测区域发送至服务器,所述服务器将划分后的监测区域分别发送至运行监测模块和环境监测模块;所述数据采集模块用于采集监测区域内供电系统的实时运行数据和实时环境数据并发送至服务器,所述服务器将实时运行数据发送至运行监测模块、实时环境数据发送至环境监测模块;
所述服务器中存储有供电系统的标准运行数据和标准环境数据,将标准运行数据发送至运行监测模块,将标准环境数据发送至环境监测模块;所述运行监测模块用于对监测区域内供电系统的运行状况进行监测,监测得到监测区域内供电系统的运行偏离系数反馈至服务器;所述环境监测模块用于对监测区域内供电系统的环境状况进行监测,监测得到监测区域内供电系统的环境偏差系数反馈至服务器,所述服务器将运行偏离系数和环境偏差系数发送至预警分级模块,并将监测区域的运行偏离系数标记为YXu、环境偏差系数标记为HXu;
预警分级模块依据监测区域的运行偏离系数和环境偏差系数对监测区域进行预警分级,生成高危预警信号、中危预警信号和低危预警信号反馈至服务器,服务器依据工作人员负责的供电区域将高危预警信号和中危预警信号发送至对应的用户终端,工作人员前往高危预警信号和中危预警信号对应的监测区域,同时,所述服务器将高危预警信号、中危预警信号和低危预警信号发送至供电管控模块,供电管控模块依据信号为监测区域设定对应的管控措施。
2.根据权利要求1所述的一种智能供电系统,其特征在于,所述服务器连接有若干个用户终端,所述用户终端用于工作人员输入个人信息后注册登录系统,并将个人信息发送至服务器内存储。
3.根据权利要求2所述的一种智能供电系统,其特征在于,个人信息包括人员姓名、实名认证的手机号码、入职时间、个人照片和负责的供电区域;
实时运行数据为供电系统的实时电流值和实时电压值;
实时环境数据为环境温度值、环境降雨量和环境风力值;
标准运行数据包括标准电流变化速率、标准电压变化速率,标准环境数据包括标准温度值、标准湿度值和标准风力值。
4.根据权利要求3所述的一种智能供电系统,其特征在于,所述运行监测模块的监测过程具体如下:
步骤一:设定供电系统的监测时段,并在监测时段中设定三组时间点Tui,i=1,2,3,i代表时间点的编号;
步骤二:获取在监测时段中各个时间点时监测区域的实时电流值LTui和实时电压值YTui;
步骤三:结合公式计算得到在监测时段中监测区域内供电系统的实时电流变化速率LBSu;
同理,结合公式计算得到在监测时段中监测区域内供电系统的实时电压变化速率YBSu;
步骤四:获取监测区域内供电系统对应的标准电流变化速率和标准电压变化速率,将标准电流变化速率与实时电流变化速率进行比对、标准电压变化速率与实时电压变化速率进行比对,得到监测区域内供电系统的电流变化速率差CLBSu和电压变化速率差CYBSu;
步骤五:将电流变化速率差CLBSu和电压变化速率差CYBSu代入计算式YPu=CLBSu×a1+CYBSu×a2计算得到监测区域内供电系统的运行偏离值YPu;式中,a1和a2均为固定数值的权重系数,且a1和a2的取值均大于零;
步骤六:若YPu<X1,则监测区域内供电系统的运行偏离系数为α1;
若X1≤YPu<X2,则监测区域内供电系统的运行偏离系数为α2;
若X2≤YPu,则监测区域内供电系统的运行偏离系数为α3;其中,X1和X2均为运行偏离阈值,且X1<X2。
5.根据权利要求4所述的一种智能供电系统,其特征在于,运行偏离系数α1的取值小于运行偏离系数α2的取值,运行偏离系数α2的取值小于运行偏离系数α3的取值。
6.根据权利要求5所述的一种智能供电系统,其特征在于,所述环境监测模块的监测过程具体如下:
步骤S1:获取监测区域内供电系统在各个时间点时的环境温度值WTui、环境降雨量STui和环境风力值FTui;
步骤S2:各个时间点时的环境温度值比对标准温度值,取绝对值后得到三组温度差值CWTui,三组温度差值相加求和取平均值得到监测区域内供电系统的环境温差值JCWTu;
同理,得到监测区域内供电系统的环境湿差值JCSTu和环境风力差值JCFTu;
步骤S3:将环境温差值JCWTu、环境湿差值JCSTu和环境风力差值JCFTu代入计算式HPu=JCWTu×c1+JCSTu×c2+JCFTu×c3计算得到监测区域内供电系统的环境偏差值HPu;式中,c1、c2和c3均为固定数值的比例系数,且c1、c2和c3的取值均大于零;
步骤S4:若YPu<Y1,则监测区域内供电系统的环境偏差系数为β1;
若Y1≤YPu<Y2,则监测区域内供电系统的环境偏差系数为β2;
若Y2≤YPu,则监测区域内供电系统的环境偏差系数为β3;其中,Y1和Y2均为环境偏差阈值,且Y1<Y2。
7.根据权利要求6所述的一种智能供电系统,其特征在于,环境偏差系数β1的取值小于环境偏差系数β2的取值,环境偏差系数β2的取值小于环境偏差系数β3的取值。
8.根据权利要求7所述的一种智能供电系统,其特征在于,所述预警分级模块的工作过程具体如下:
步骤SS1:若YXu≥M1且HXu≥N1,则判定监测区域为供电高危区域,生成高危预警信号;
步骤SS2:若YXu<M1且HXu≥N1或YXu≥M1且HXu<N1,则判定监测区域为供电中危区域,生成中危预警信号;
步骤SS3:若YXu<M1且HXu<N1,则判定监测区域为供电低危区域,生成低危预警信号;式中,M1与YXu相对应,N1与HXu相对应,M1为运行偏离系数的预设值,N1为环境偏差系数的预设值,且M1和N1均为固定数值。
9.根据权利要求8所述的一种智能供电系统,其特征在于,管控措施具体为:
若监测区域为供电高危区域,设定全天二十四小时的监测时段、增设监测点和监测站、工作人员定期对供电系统进行维护;
若监测区域为供电中危区域,设定每间隔周期为两小时的监测时段、增设监测点和监测站;
若监测区域为供电低危区域,设定每间隔周期为四小时的监测时段。'
sentences:
- '1.一种直流充电设备的功率分配系统,其特征在于:它包括至少两组充电模块(2)与充电终端(4)和功率分配装置(3),所述功率分配装置(3)分别连接于所述充电模块(2)和充电终端(4)之间,所述充电模块(2)与电网(1)连接;所述功率分配装置(3)包括多个功率分配单元,每个功率分配单元包括输入端口(31)、第一开关模块(32)和输出端口(34),每组充电模块(2)连接一个输入端口(31),所述输出端口(34)连接充电终端(4),所述输入端口(31)和输出端口(34)之间串联一组第一开关模块(32)。
2.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:所述充电终端(4)的数量不大于充电模块(2)的数量。
3.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:每个输入端口(31)和对应的输出端口(34)组成一个输出回路,任意两个输出回路之间连接一组第二开关模块(33),若有N个输出回路,则有组第二开关模块(33)用于连接所有的输出回路,N组第一开关模块(32)用于闭合输出回路。
'
- '1.一种基于数据监测的断路器运行检测系统,其特征在于,包括用户终端、检测核验模块、数据采集模块、运行监测模块、大数据模块、检测评定模块、环境监测模块以及服务器,所述数据采集模块用于采集断路器的运行数据和环境数据,并将运行数据和环境数据发送至服务器;所述用户终端用于工作人员输入断路器的型号和对应的检测评定阈值,并将断路器的型号和对应的检测评定阈值发送至服务器,所述服务器将断路器的型号发送至大数据模块,大数据模块依据型号获取断路器的标准运行数据、标准环境数据和检测评定数据,并将断路器的标准运行数据、标准环境数据和检测评定数据反馈至服务器;
所述服务器将断路器的环境数据发送至环境监测模块,所述环境监测模块用于对断路器的环境数据进行监测,监测得到断路器的环境偏差和对应的环境偏差系数并反馈至服务器;所述服务器将断路器的运行数据发送至运行监测模块,所述运行监测模块用于对断路器的运行数据进行监测,监测得到断路器的运行偏差等级和对应的运行偏差系数并反馈至服务器,所述服务器将断路器的运行偏差等级和对应的运行偏差系数、环境偏差等级和对应的环境偏差系数发送至检测评定模块,所述检测评定模块用于对断路器的运行情况进行检测评定,生成运行正常信号或运行异常信号并反馈至服务器,所述服务器将接收到的运行正常信号或运行异常信号反馈至用户终端;
在接收到运行异常信号时,所述检测核验模块用于对断路器的异常运行情况进行核验,生成检测准确信号或检测偏差信号并反馈至服务器,所述服务器将检测准确信号或检测偏差信号发送至用户终端。
2.根据权利要求1所述的一种基于数据监测的断路器运行检测系统,其特征在于,运行数据包括断路器的运行温度值、运行振幅值、运行电压值、运行电流值;环境数据包括断路器所在地的环境温度值、环境湿度值、环境风力值;
标准运行数据包括断路器的标准温度变化速率、标准电压变化速率、运行温度阈值、运行湿度阈值、运行电压值和运行电流值,标准环境数据包括断路器的环境温度阈值、环境湿度阈值和环境风力阈值,检测评定数据包括断路器的检测评定值。
3.根据权利要求1所述的一种基于数据监测的断路器运行检测系统,其特征在于,所述环境监测模块的监测过程具体如下:
步骤一:将断路器标记为u,u=1,2,……,z,z为正整数;获取断路器所在地未来十五天的天气预报,通过天气预报得到断路器所在地未来十五天的环境温度值、环境湿度值和环境风力值;
步骤二:将断路器所在地未来十五天的环境温度值、环境湿度值、环境风力值取平均值,得到断路器所在地未来十五天的环境温度均值JWDu、环境湿度均值JSDu和环境风力均值JFLu;
步骤三:获取断路器所在地的海拔,依据断路器所在地海拔得到对应的环境温度阈值YHWDu、环境湿度阈值YHSDu和环境风力阈值YHFLu;
步骤四:利用公式WCu=|JWDu-YHWDu|计算得到断路器环境温度均值与环境温度阈值的差值得到环境温度差值WCu;
同理,计算得到断路器的环境湿度差值SCu和环境风力差值FCu;
步骤五:将环境温度差值WCu、环境湿度差值SCu和环境风力差值FCu代入计算式计算得到断路器的环境偏差值HPu,公式具体如下:
式中,a1、a2和a3均为固定数值的比例系数,且a1、a2和a3的取值均大于零;
步骤六:若HPu<X1,则断路器的环境偏差等级为第三环境偏差等级,并设定对应的环境偏差系数;
若X1≤HPu<X2,则断路器的环境偏差等级为第二环境偏差等级,并设定对应的环境偏差系数;
若X1≤HPu,则断路器的环境偏差等级为第一环境偏差等级,并设定对应的环境偏差系数;式中,X1和X2均为环境偏差阈值,且X1<X2。
4.根据权利要求3所述的一种基于数据监测的断路器运行检测系统,其特征在于,第三环境偏差等级的环境偏差系数小于第二环境偏差等级的环境偏差系数,第二环境偏差等级的环境偏差系数小于第一环境偏差等级的环境偏差系数。
5.根据权利要求1所述的一种基于数据监测的断路器运行检测系统,其特征在于,所述运行监测模块的监测过程具体如下:
步骤S1:获取断路器前一天的运行数据,得到运行数据中的运行温度值和运行电压值;
步骤S2:在前一天中设定任意的三个时间点t1、t2和t3,获取在三个时间时断路器对应的运行温度值和运行电压值,分别标记为WDut1、WDut2、WDut3、DYut1、DYut2、DYut3;其中,t1<t2<t3;
步骤S3:利用公式计算时间点t1至时间点t2之间断路器的温度变化速率WBS1u,利用公式时间点t2至时间点t3之间断路器的温度变化速率WBS2u;
同理,结合计算时间点t1至时间点t2之间断路器的电压变化速率DBS1u、时间点t2至时间点t3之间断路器的电压变化速率DBS2u;
步骤S4:温度变化速率WBS1u与温度变化速率WBS2u计算均值得到断路器的温度变化均速率WBJSu,电压变化速率DBS1u与电压变化速率DBS2u计算均值得到断路器的电压变化均速率DBJSu;
步骤S5:获取断路器对应的标准温度变化速率WBSu和标准电压变化速率DBSu,计算标准温度变化速率与温度变化均速率的差值得到断路器的温度变化速率差WBSCu,计算标准电压变化速率与电压变化均速率的差值得到断路器的电压变化速率差DBSCu;
步骤S6:结合公式YPu=WBSCu×b1+DBSCu×b2计算得到断路器的运行偏差值YPu;式中,b1和b2均为固定数值的权重系数,且b1和b2的取值均大于零;
步骤S7:若YPu<Y1,则断路器的运行偏差等级为第三运行偏差等级,并设定对应的运行偏差系数;
若Y1≤YPu<Y2,则断路器的运行偏差等级为第二运行偏差等级,并设定对应的运行偏差系数;
若Y2≤YPu,则断路器的运行偏差等级为第一运行偏差等级,并设定对应的运行偏差系数;式中,Y1和Y2均为运行偏差阈值,且Y1<Y2。
6.根据权利要求5所述的一种基于数据监测的断路器运行检测系统,其特征在于,第三运行偏差等级的运行偏差系数小于第二运行偏差等级的运行偏差系数,第二运行偏差等级的运行偏差系数小于第一运行偏差等级的运行偏差系数。
7.根据权利要求1所述的一种基于数据监测的断路器运行检测系统,其特征在于,所述检测评定模块的工作过程具体如下:
步骤SS1:获取断路器的运行偏差系数和运行偏差系数,分别标记为PC1u和PC2u;
步骤SS2:获取断路器的故障次数,并将故障次数标记为GZu;获取断路器上一次故障与本次故障的间距时长,每个间隔时长计算均值后得到断路器的正常运行时长ZYTu;
步骤SS3:结合公式计算得到断路器的检测评定值JPu;式中,α和β均为固定数值,且α和β的取值均大于零;
步骤SS4:获取断路器的检测评定阈值YJPu,计算检测评定阈值与检测评定值之间的差值得到断路器的检测评定差值JPCu;
步骤SS5:若断路器的检测评定差值JPCu在误差范围内,则生成运行正常信号;
若断路器的检测评定差值JPCu不在误差范围内,则生成运行异常信号。
8.根据权利要求7所述的一种基于数据监测的断路器运行检测系统,其特征在于,所述检测核验模块的核验过程具体如下:
步骤P1:获取断路器的检测评定次数,并将检测评定次数标记为CSu;
步骤P2:获取断路器每次检测评定时的检测评定值JPui,i=1,2,……,x,x为正整数,i为检测评定次数的编号,利用公式计算得到断路器的检测评定均值JJPu;
步骤P3:将检测评定均值作为断路器的检测核验值,结合公式计算当前断路器的检测评定值与检测核验值的差值并记为检测核验差值HYCu;
步骤P4:若当前断路器的检测核验差值HYCu在误差范围内,则生成检测准确信号,若当前断路器的检测核验差值HYCu在误差范围内,则生成检测偏差信号。'
- '{1.基于弹簧零位基准和激光自准直测量的弹簧隔振平台,其特征在于弹簧隔振平台台体(6)配置在3个或3个以上均匀分布的隔振器(4)上,隔振器(4)配置在基座(5)上,所述隔振器(4)由隔振器基座(4a)、隔振器支架(4b)和隔振器工作台(4c)构成,隔振器工作台(4c)安装在隔振器基座(4a)内,隔振器支架(4b)配置在隔振器基座(4a)外侧部上,在各个隔振器(4)与弹簧隔振平台台体(6)之间配置水平位移执行器(8),所述的水平位移执行器(8)采用水平放置的直线型音圈电机,水平位移执行器(8)的直线型音圈电机动子(8a)与弹簧隔振平台台体(6)固连,水平位移执行器(8)的直线型音圈电机定子(8b)配置在隔振器支架(4b)上;测量弹簧隔振平台台体(6)六自由度姿态的激光位置测量光路由He-Ne激光器(1)、激光自准直系统(2)、零位基准装置(3)、台体姿态光电检测器(7)、台体姿态分光棱镜(10)构成,其中台体姿态光电检测器(7)、台体姿态分光棱镜(10)固装在弹簧隔振平台台体(6)下端面上,所述的台体姿态分光棱镜(10)包括第一分光棱镜(10a)、第二分光棱镜(10b)、第三分光棱镜(10c)和第四分光棱镜(10d),且第一分光棱镜(10a)位于激光自准直系统(2)的透射激光光路上,第二分光棱镜(10b)位于第一分光棱镜(10a)的透射光路上,第三分光棱镜(10c)位于第一分光棱镜(10a)的反射光路上,第四分光棱镜(10d)位于第三分光棱镜(10c)的反射光路上;所述的台体姿态光电检测器(7)包括第一光电检测器(7a)、第二光电检测器(7b)、第三光电检测器(7c)和第四光电检测器(7d),其中第一光电检测器(7a)位于第二分光棱镜(10b)的透射光路上,第二光电检测器(7b)位于第二分光棱镜(10b)的反射光路上,第三光电检测器(7c)位于第四分光棱镜(10d)的透射光路上,第四光电检测器(7d)位于第四分光棱镜(10d)的反射光路上;所述的激光自准直系统(2)由激光扩束准直系统(11)、凸透镜(12)、平漂与角漂检测光电检测器(13)、光束调整机构(14)、平漂与角漂检测分光棱镜(9)构成,其中光束调整机构(14)位于激光扩束准直系统(11)和平漂与角漂检测分光棱镜(9)之间,凸透镜(12)位于平漂与角漂检测分光棱镜(9)和平漂与角漂检测光电检测器(13)之间,光束调整机构(14)包括可调整相对位置间距和角度的楔角棱镜A(14a)、楔角棱镜B(14b);所述的零位基准装置(3)包括零位基准光电检测器安装平台(3a)和固有频率低于0.5Hz的被动减振器(3b),零位基准光电检测器安装平台(3a)通过被动减振器(3b)安装在基座(5)上,并位于激光自准直系统(2)的下侧折射光路上;由平漂光电检测器(13a)和角漂光电检测器(13b)构成的平漂与角漂检测光电检测器(13)固装在零位基准装置(3)的零位基准光电检测器安装平台(3a)上,平漂、角漂光电检测器(13a、13b)接收面分别与各自运动方向水平,且接收面中心与对应光束中心重合。,2.根据权利要求1所述的基于弹簧零位基准和激光自准直测量的弹簧隔振平台,其特征在于所述的台体姿态光电检测器(7)和平漂与角漂检测光电检测器(13)包括位置敏感器件PSD、图像传感器CCD、四象限探测器QPD和硅光电池。,3.根据权利要求1所述的基于弹簧零位基准和激光自准直测量的弹簧隔振平台,其特征在于所述的被动减振器(3b)采用弹簧结构,且被动减振器(3b)为零刚度减振器。}'
- source_sentence: '1.一种基于三相逆变电源装置的轨道交通供电系统,包括牵引供电系统和低压供配电系统,其特征是:从地方电网引入两回外部进线电源(1),通过一个中压供电环网(3)向工程沿线若干个牵引降压混合变电所(B1)和降压变电所(B2)供电,该两回外部进线电源(1)分别通过变电所中压开关(2)引入变电所中压I段母线(21),该中压供电环网(3)通过变电所中压开关(2)与工程沿线的牵引降压混合变电所(B1)和降压变电所(B2)的变电所中压I段母线(21)相连。
2.如权利要求1所述一种基于三相逆变电源装置的轨道交通供电系统,其特征是:所述牵引降压混合变电所(B1)内,变电所中压I段母线(21)通过变电所中压开关(2)与整流变压器(5)相连,再与牵引整流装置(6)相连,整流输出DC1500V或DC750V直流电源至变电所直流母线(7),输出连接至工程线路的直流牵引网(8);在牵引变电所附近通过牵引网绝缘分段(9)将直流牵引网(8)分为左右两侧,机车车辆(10)通过直流牵引网(8)取电实现正常运行,该部分构成牵引供电系统。
3.如权利要求2所述一种基于三相逆变电源装置的轨道交通供电系统,其特征是:所述各牵引降压混合变电所(B1)和降压变电所(B2)内,变电所中压I段母线(21)通过变电所中压开关(2)与变电所内配置的一台配电变压器(11)相连,降压输出一回0.4kV低压电源;同时各变电所内配置的一套三相逆变电源装置(4)与直流牵引网(8)相连取得DC1500V或DC750V输入电源,逆变输出一回独立的0.4kV低压电源;该两回独立的0.4kV低压电源共同向低压动照一级负荷(12)供电,该部分构成低压供配电系统。'
sentences:
- '1.一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,包括能量管理平台、由多个海上油田平台构建的海上油田群组以及能量调用模块,所述能量管理平台与所述海上海上油田群组电性连接,所述能量管理平台还与所述能量调用模块电性连接,其中:
所述海上油田群组,用于采集海上油田平台的运行参数,将该运行参数进行处理后发送给能量管理平台;
所述能量管理平台,用于接收海上油田群组发送的运行参数,进行能量匹配建模,完成能量匹配任务,并将能量调用指令发送给能量调用模块;
所述能量调用模块,用于接收能量调用指令,向海上油田群组分配能量调用任务。
2.根据权利要求1所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述海上油田平台包括信号采集端、信号转化端、信号存储端和信号发送端,所述信号采集端与所述信号转化端电性连接,所述信号转化端和所述信号存储端电性连接,所述信号存储端与所述信号发送端电性连接。
3.根据权利要求2所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述信号采集端用于采集海上油田平台的运行参数,并将采集得到的运行参数发送给信号转化端,所述信号转化端接收所述信号采集端发送的运行参数,将该运行参数转化为数字信号后发送给信号存储端,信号存储端接收该数字信号并发送给信号发送端。
4.根据权利要求1所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量管理平台包括信号接收端、能量匹配端和电网信息获取端、指令传输端和反馈端,所述信息接收端和电网信息获取端均与能量匹配端电性连接,所述能量匹配端和所述指令传输端电性连接。
5.根据权利要求4所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量管理平台还包括有反馈端,所述反馈端与所述指令传输端电性连接。
6.根据权利要求1所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量调用模块包括有能量调度模块、指令接收模块和指令发送模块,所述能量调度模块分别与指令接收模块、指令发送模块电性连接。
7.根据权利要求6所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述能量调用模块还包括有互联网云端,所述互联网云端与所述能量调度模块电性连接。
8.根据权利要求1-7任一所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,还包括有储能模块,所述储能模块与所述能量管理平台电性连接,储能模块用于存储能量互联后剩余能量的存储。
9.根据权利要求8所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述储能模块包括有指令接收端和匹配单元,所述指令接收端与所述匹配单元电性连接,所述指令接收端用于接收能量调用模块发送的能量调用指令,并将该能量调用指令发送给匹配单元,由匹配单元进行能量匹配存储。
10.根据权利要求9所述的一种基于岸基供电的海上油田群组的互联能量管理系统,其特征在于,所述储能模块还包括有统计单元和显示单元,所述统计单元与所述显示单元电性连接。'
- 1.一种供电自动切换方法,用于轨道交通中压环网供电系统,其特征在于,所述供电自动切换方法包括:获取中压环网供电系统中每个供电区的供电检测数据,其中,所述供电检测数据包括外供电进线电压、外供电进线断路器状态、外供电进线断路器电流、变电所母线电压、环网断路器状态、环网断路器电流、环网断路器对应的跳闸信号;根据所述供电检测数据识别故障供电区和故障类型,并根据故障类型确定采用的切换控制模式,其中,所述切换控制模式包括外供电故障切换模式和单环网内故障切换模式;根据所述切换控制模式对所述故障供电区的供电进行切换;其中,所述根据所述供电检测数据识别故障供电区和故障类型,并根据故障类型确定采用的切换控制模式,包括:若确定所述外供电进线电压小于进线电压阈值、所述外供电进线断路器处于合闸状态、所述外供电进线断路器电流小于电流阈值、所述变电所母线电压小于母线电压阈值,则对应供电区为所述故障供电区,并确定为外部供电故障,采用所述外供电故障切换模式;若确定所述环网断路器处于分闸状态、所述环网断路器电流小于电流阈值、检测到所述环网断路器对应的跳闸信号,则对应供电区为故障供电区,并确定为环网内部故障,采用所述单环网内故障切换模式。2.根据权利要求1所述的供电自动切换方法,其特征在于,所述根据所述切换控制模式对所述故障供电区的供电进行切换,包括:判断所述故障供电区的相邻供电区的外供电进线电压是否大于所述进线电压阈值,且所述相邻供电区的外供电进线断路器处于合闸状态;如果是,发送控制所述故障供电区的进线断路器的分闸控制信号,并发送断开所述故障供电区的均分开关的控制信号,以及,发送关闭所述故障供电区与所述相邻供电区之间的联络开关的控制信号。3.根据权利要求1所述的供电自动切换方法,其特征在于,所述根据所述切换控制模式对所述故障供电区的供电进行切换,包括:确定所述故障供电区中的子故障供电区,其中,所述故障供电区中变电所母线电压小于母线电压阈值的环网区域为所述子故障供电区;判断所述子故障供电区的相邻供电区的外供电进线电压是否大于进线电压阈值,且所述子故障供电区的相邻供电区的外供电进线断路器处于合闸状态;如果是,发送关闭所述子故障供电区与所述相邻供电区之间的联络开关的控制信号。4.根据权利要求1-3任一项所述的供电自动切换方法,其特征在于,所述供电自动切换方法还包括:接收切换反馈信号,并根据所述切换反馈信号进行切换状态提示。5.根据权利要求1-3任一项所述的供电自动切换方法,其特征在于,所述供电自动切换方法还包括:发送通讯故障检测信号,并接收通讯反馈信号,根据所述通讯反馈信号判断通讯是否故障。6.根据权利要求1所述的供电自动切换方法,其特征在于,所述供电检测数据包括外供电进线电压、变电所母线电压、断路器跳闸信号,所述供电自动切换方法还包括:确定同一供电区的所述外供电进线电压大于进线电压阈值、且所述变电所母线电压小于母线电压阈值、且未检测到所述断路器跳闸信号,识别为人工分闸操作。7.一种监控系统,其特征在于,包括:通讯装置,用于接收轨道交通中压环网供电系统每个供电区的供电检测数据;轨道交通电力监控系统,包括切换控制模块,所述切换控制模块用于执行如权利要求1-6任一项所述的供电自动切换方法。8.一种轨道交通中压环网系统,其特征在于,包括:供电系统,包括多个供电区,外供电引线通过进线断路器分别为每个所述供电区供电,相邻供电区之间设置联络开关,每个供电区包括多个变电所,每个变电所通过环网断路器连接;智能电子设备,用于检测所述供电系统中每个所述供电区的供电检测数据;如权利要求7所述的监控系统,所述监控系统与所述智能电子设备通过监控数据网络进行通讯。
- '{1.一种通过硬件实现的图像二值化处理方法,包括:,RAM初始化步骤:将用于记录各个像素值的像素个数的随机存取存储器(RAM)初始化为零,所述RAM包含256个数据存储单元,每个数据存储单元在RAM中的地址对应于其记录像素个数的像素值;,图像读取步骤:通过直接内存存取(DMA)单元,读取待处理图像,每读入一个或多个像素值,将所述RAM中该像素值对应的数据存储单元中的像素个数值累积加1,直至完成待处理图像的读取为止;,总乘积和值计算步骤:通过至少两组第一乘法器,并行地计算其对应分区内的各个所述像素值与其像素个数值的乘积和值,并且将各个乘积和值相加得到总乘积和值GSUM,其中,每组所述第一乘法器对应于所述RAM的一个分区;,类间方差计算步骤:通过至少三个第二乘法器以及除法器并根据所述总乘积和值GSUM,迭代地计算对应于各个像素值的类间方差值G,,并且通过比较器将计算得到的类间方差值G,与已有的最大类间方差值G,进行比较,获得当前的最大类间方差值G,",i∈[0,255];",阈值确定步骤:将经过迭代计算和比较获得的最大类间方差值G,作为图像二值化的阈值。,2.根据权利要求1所述的方法,其中,通过主状态机执行所述图像二值化处理方法,通过子状态机执行所述类间方差计算步骤。,3.根据权利要求1所述的方法,其中,所述RAM的寻址位宽为8,所述RAM的数据读入宽度为2,",n∈[0,7]。",4.根据权利要求1~3中任一项所述的方法,其中,所述至少两组第一乘法器为两组,,在所述总乘积和值计算步骤,第一组第一乘法器从地址0开始,将各个地址对应的像素值与其像素个数进行相乘累加,累加到地址127为止,第二组第一乘法器从地址255开始,并行地将各个地址对应的像素值与其像素个数进行相乘累加,累加到地址128为止,再将第一组第一乘法器累加得到的乘积和值与第二组第一乘法器累加得到的乘积和值相加,获得所述总乘积和值GSUM。,5.根据权利要求4所述的方法,其中,,在所述类间方差计算步骤中,通过以下公式迭代地计算获得对应于像素值i的类间方差值G,:,其中,S为待处理图像的总像素个数,Num,为像素值i对应的像素个数,GSum,为从0像素值与其对应的像素个数到像素值i与其对应的像素个数的乘积和值。,6.根据权利要求5所述的方法,其中,所述至少三个第二乘法器为三个,,在所述类间方差计算步骤中,通过第一个第二乘法器计算GSum,,通过第二个第二乘法器和第三个第二乘法器并行地执行各两次乘法计算GSum,×S、GSUM×Num,、Num,×(S-Num,)以及(GSum,×S-GSUM×Num,),。,7.一种通过硬件实现的图像二值化处理装置,包括:,RAM初始化模块,用于将用于记录各个像素值的像素个数的随机存取存储器(RAM)初始化为零,所述RAM包含256个数据存储单元,每个数据存储单元在RAM中的地址对应于其记录像素个数的像素值;,图像读取模块,用于通过直接内存存取(DMA)单元,读取待处理图像,每读入一个或多个像素值,将所述RAM中该像素值对应的数据存储单元中的像素个数值累积加1,直至完成待处理图像的读取为止;,总乘积和值计算模块,用于通过至少两组第一乘法器,并行地计算其对应分区内的各个所述像素值与其像素个数值的乘积和值,并且将各个乘积和值相加得到总乘积和值GSUM,其中,每组所述第一乘法器对应于所述RAM的一个分区;,类间方差计算模块,用于通过至少三个第二乘法器以及除法器并根据所述总乘积和值GSUM,迭代地计算对应于各个像素值的类间方差值G,,并且通过比较器将计算得到的类间方差值G,与已有的最大类间方差值G,进行比较,获得当前的最大类间方差值G,,i∈[0,255];,阈值确定模块,用于将经过迭代计算和比较获得的最大类间方差值G,确定为图像二值化的阈值。,8.根据权利要求7所述的装置,其中,所述至少两组第一乘法器为两组,,第一组第一乘法器用于从地址0开始,将各个地址对应的像素值与其像素个数进行相乘累加,累加到地址127为止,,第二组第一乘法器用于从地址255开始,并行地将各个地址对应的像素值与其像素个数进行相乘累加,累加到地址128为止,,所述总乘积和值计算模块用于将第一组第一乘法器累加得到的乘积和值与第二组第一乘法器累加得到的乘积和值相加,获得所述总乘积和值GSUM。,9.根据权利要求8所述的装置,其中,所述类间方差计算模块用于通过以下公式迭代地计算获得对应于像素值i的类间方差值G,:,其中,S为待处理图像的总像素个数,Num,为像素值i对应的像素个数,GSum,为从0像素值与其对应的像素个数到像素值i与其对应的像素个数的乘积和值。,10.根据权利要求9所述的装置,其中,所述至少三个第二乘法器为三个,,所述类间方差计算模块用于通过第一个第二乘法器计算
GSum,,通过第二个第二乘法器和第三个第二乘法器并行地执行各两次乘法计算GSum,×S、GSUM×Num,、Num,×(S-Num,)以及(GSum,×S-GSUM×Num,),。,11.一种计算机可读存储介质,其上存储有计算机程序指令,其中,所述程序指令被处理器执行时实现权利要求1~6中任一项所述图像二值化处理方法的模块。,12.一种电子设备,包括:处理器、存储器、通信元件和通信总线,所述处理器、所述存储器和所述通信元件通过所述通信总线完成相互间的通信;,所述存储器用于存放至少一可执行指令,所述可执行指令使所述处理器执行如权利要求1~6中任一项所述图像二值化处理方法对应的操作。}'
- source_sentence: '1.一种面向能源互联网调配管理的数字化系统,其特征在于,包括数字化控制模块(1)、能源站群组、输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)和负荷供电模块(7),所述能源站群组包括若干电性连接的能源站(2),其中:
所述数字化控制模块(1)分别与能源站群组、输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)电性连接,所述供电设备监测模块(5)和发电设备监测模块(6)与所述负荷供电模块(7)电性连接。
2.根据权利要求1所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)用于监测电网运行中供电设备、发电设备、输电线路和用电设备的运行参数,并将该运行参数发送给数字化控制模块(1)。
3.根据权利要求2所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述数字化控制模块(1)用于接收输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)发送的运行参数,对该运行参数进行数据分析,获取数据分析结果,依据该数据分析结果进行供配电调配。
4.根据权利要求3所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述数字化控制模块(1)包括有控制单元(11)、指令单元(14)、决策单元(13)和信息收发单元(12)。
5.根据权利要求4所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述控制单元(11)分别与指令单元14、决策单元(13)和信息收发单元(12)电性连接,所述指令单元(14)和决策单元(13)电性连接,所述决策单元(13)和信息收发单元(12)电性连接。
6.根据权利要求5所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述电网稳定性监测模块(4)包括有数据采集单元(41)、数据转化单元(42)、分析单元(43)和数据交换单元(44),所述数据采集单元(41)、数据转化单元(42)、分析单元(43)均与数据交换单元(44)电性连接。
7.根据权利要求6所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述负荷供电模块(7)包括有需求获取单元(71)、供电统计单元(72)、匹配单元(73)和切负荷单元(74)。
8.根据权利要求7所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述需求获取单元(71)、供电统计单元(72)均与匹配单元(73)电性连接,所述匹配单元(73)与切负荷单元(74)电性连接。
9.根据权利要求8所述的一种面向能源互联网调配管理的数字化系统,其特征在于,还包括互联网云端,所述互联网云端与所述数字化控制模块(1)电性连接。
10.根据权利要求9所述的一种面向能源互联网调配管理的数字化系统,其特征在于,互联网云端用于连接互联网数据库,所述数字化控制模块(1)用于向互联网云端发送数据调用指令,互联网云端接收该数据调用指令,依据数据调用指令调取数据后发送给互联网云端。'
sentences:
- '{1.一种资源指示方法,其特征在于,包括:,获取下行控制信息;其中,所述下行控制信息包括第一信令;,当所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源时,根据所述第一信令,处理所述终端在所述第一时间段内的行为;和/或,,当所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源时,根据所述第一信令,处理所述终端从所述第一起始时刻之后的行为。,2.根据权利要求1所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,3.根据权利要求2所述的方法,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,4.根据权利要求2所述的方法,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,5.根据权利要求2所述的方法,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,6.根据权利要求1所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,7.根据权利要求6所述的方法,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,8.根据权利要求6或7所述的方法,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,9.根据权利要求1-8任一项所述的方法,其特征在于,所述根据所述第一信令,处理所述终端在所述第一时间段内的行为,包括:,当第一信令指示所述第一时间段内为所述终端配置参考信号资源时,确定在所述第一时间段内是否需要执行移动性无线电资源管理测量;当所述第一信令指示所述第一时间段内未为所述终端配置参考信号资源时,所述终端进入睡眠状态,并且在所述第一时间段内不执行移动性无线电资源管理测量;和/或,,所述根据所述第一信令,处理所述终端从所述第一起始时刻之后的行为,包括:,当第一信令指示所述第一起始时刻之后为所述终端配置参考信号资源时,确定在所述第一起始时刻之后为所述终端配置参考信号资源的时间频率位置上是否需要执行移动性无线电资源管理测量;当所述第一信令指示从所述第一起始时刻之后未为所述终端配置参考信号资源时,所述终端进入睡眠状态,并且在所述第一信令指示未为所述终端配置参考信号资源的时间频率位置上不执行移动性无线电资源管理测量。,10.一种资源指示方法,其特征在于,包括:,确定下行控制信息;,其中,所述下行控制信息包括第一信令,所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源;和/或,所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源;,向所述终端发送所述下行控制信息。,11.根据权利要求10所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,12.根据权利要求11所述的方法,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段
内是否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,13.根据权利要求11所述的方法,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,14.根据权利要求11所述的方法,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,15.根据权利要求10所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,16.根据权利要求15所述的方法,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,17.根据权利要求15或16所述的方法,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,18.一种资源指示装置,其特征在于,包括:收发单元和处理单元,其中,,所述收发单元,用于获取下行控制信息;其中,所述下行控制信息包括第一信令;,当所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源时,所述处理单元,用于根据所述第一信令,处理所述终端在所述第一时间段内的行为;和/或,,当所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源时,所述处理单元,用于根据所述第一信令,处理所述终端从所述第一起始时刻之后的行为。,19.根据权利要求18所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,20.根据权利要求19所述的装置,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,21.根据权利要求19所述的装置,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,22.根据权利要求19所述的装置,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,23.根据权利要求18所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,24.根据权利要求23所述的装置,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,25.根据权利要求23或24所述的装置,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,26.根据权利要求18-25任一项所述的装置,其特征在于,当第一信令指示所述第一时间段内为所述终端配置参考信号资源时,所述处理单元,具体用于确定
在所述第一时间段内是否需要执行移动性无线电资源管理测量;当所述第一信令指示所述第一时间段内未为所述终端配置参考信号资源时,所述处理单元,具体用于控制所述终端进入睡眠状态,并且在所述第一时间段内不执行移动性无线电资源管理测量;和/或,,当第一信令指示所述第一起始时刻之后为所述终端配置参考信号资源时,所述处理单元,具体用于确定在所述第一起始时刻之后为所述终端配置参考信号资源的时间频率位置上是否需要执行移动性无线电资源管理测量;当所述第一信令指示从所述第一起始时刻之后未为所述终端配置参考信号资源时,所述处理单元,具体用于控制所述终端进入睡眠状态,并且在所述第一信令指示未为所述终端配置参考信号资源的时间频率位置上不执行移动性无线电资源管理测量。,27.一种资源指示装置,其特征在于,包括:,处理单元,用于确定下行控制信息;,其中,所述下行控制信息包括第一信令,所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源;和/或,所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源;,收发单元,用于向所述终端发送所述下行控制信息。,28.根据权利要求27所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,29.根据权利要求28所述的装置,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,30.根据权利要求28所述的装置,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,31.根据权利要求28所述的装置,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,32.根据权利要求27所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,33.根据权利要求32所述的装置,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,34.根据权利要求32或33所述的装置,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,35.一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当所述指令被运行时,实现上述权利要求1-9任一项所述的资源指示方法、和/或,权利要求10-17任一项所述的资源指示方法。,36.一种资源指示装置,其特征在于,所述装置包括处理器和存储介质,所述存储介质存储有指令,所述指令被所述处理器运行时,实现如权利要求1至9任一项所述的资源指示方法,或者实现权利要求10至17任一项所述的任一项所述的资源指示方法。}'
- '1.一种典型故障下岸基供电海上油田群运行控制方法,应用于海上油田群电力系统,其特征在于,包括以下步骤:
S001、海上油田群电力系统发生电力故障时,工作人员通过故障上报端向运行控制中心上报故障;
S002、运行控制中心接收得到故障上报端上报的故障后发送给故障解决方案生成模块,故障解决方案生成模块先进入本地处理模式,本地处理模式无法完成时进入联网处理模式,本地处理模式或联网处理模式完成时由本地处理模式或联网处理模式生成解决方案,并依照解决方案进行故障处理;
S003、联网处理模式无法完成时,海上油田群电力系统进行切负荷作业。
2.根据权利要求1所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,步骤S002中,本地处理模式为:由故障解决方案生成模块连接故障数据库,从故障数据库寻找与故障上报端上报故障相同的故障,采用该故障的解决方案。
3.根据权利要求2所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,步骤S002中,联网处理模式为:由故障解决方案生成模块连接互联网云端,从互联网云端寻找与故障上报端上报故障相同的故障,采用该故障的解决方案。
4.根据权利要求2所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,步骤S002中,本地处理模式还包括有专家合议,由海上油田群现场专家共同给出故障的解决方案。
5.根据权利要求1所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,还包括有步骤S004:故障处理完成后,记录解决本次故障的解决方案,发送给故障解决方案生成模块,由故障解决方案生成模块录入故障数据库。
6.根据权利要求1所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,所述海上油田群电力系统包括有运行控制中心、故障上报端、故障解决方案生产模块和解决方案执行模块,所述运行控制中心与所述故障上报端通信连接,所述运行控制中心与所述故障解决方案生产模块电性连接,所述故障解决方案生产模块与所述解决方案执行模块通信连接。
7.根据权利要求6所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,所述海上油田群电力系统还包括有互联网云端和故障数据库,所述互联网云端与运行控制中心通信连接,所述故障数据库与所述故障解决方案生成模块通信连接。
8.根据权利要求7所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,所述运行控制中心包括有设备监测模块、故障接收端、数据处理模块、指令生成模块和指令发送模块,所述设备监测模块、故障接收端均与数据处理模块电性连接,数据处理模块与指令生成模块电性连接,指令生成模块与指令发送模块电性连接。
9.根据权利要求8所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,所述故障解决方案生成模块包括指令接收模块、数据调用模块和解决方案启动模块,所述指令接收模块、数据调用模块均与解决方案启动模块电性连接。
10.根据权利要求9所述的典型故障下岸基供电海上油田群运行控制方法,其特征在于,所述故障解决方案生成模块还包括有反馈模块和故障解决效果验收模块,所述反馈模块和故障解决效果验收模块均与解决方案启动模块通信连接。'
- '1.一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,包括智能控制平台(1)、海上油田群组、输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)和负荷供电模块(7),所述海上油田群组包括若干电性连接的海上油田平台(2),其中:
所述智能控制平台(1)分别与海上油田群组、输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)电性连接,所述供电设备监测平台(5)和发电设备监测平台(6)与所述负荷供电模块(7)电性连接。
2.根据权利要求1所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)用于监测电网运行中供电设备、发电设备、输电线路和用电设备的运行参数,并将该运行参数发送给智能控制平台(1)。
3.根据权利要求2所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述智能控制平台(1)用于接收输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)发送的运行参数,对该运行参数进行数据分析,获取数据分析结果,依据该数据分析结果进行供配电调配。
4.根据权利要求3所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述智能控制平台(1)包括有总控台(101)、指令收发模块(104)、决策模块(103)和信息接收端(102)。
5.根据权利要求4所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述总控台(101)分别与指令收发模块(104)、决策模块(103)和信息接收端(102)电性连接,所述指令收发模块(104)和决策模块(103)电性连接,所述决策模块(103)和信息接收端(102)电性连接。
6.根据权利要求5所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述电网稳定性监测平台(4)包括有电力数据采集端(401)、模数信号转化模块(402)、稳定性分析模块(403)和数据交换模块(404),所述电力数据采集端(401)、模数信号转化模块(402)、稳定性分析模块(403)均与数据交换模块(404)电性连接。
7.根据权利要求6所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述负荷供电模块(7)包括有负载需求获取模块(701)、供电设备统计模块(702)、匹配模块(703)和切负荷模块(704)。
8.根据权利要求7所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述负载需求获取模块(701)、供电设备统计模块(702)均与匹配模块(703)电性连接,所述匹配模块(703)与切负荷模块(704)电性连接。
9.根据权利要求8所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,还包括互联网云端(8),所述互联网云端(8)与所述智能控制平台(1)电性连接。
10.根据权利要求9所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,互联网云端(8)用于连接互联网数据库,所述智能控制平台(1)用于向互联网云端(8)发送数据调用指令,互联网云端(8)接收该数据调用指令,依据数据调用指令调取数据后发送给互联网云端(8)。'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-small-zh-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5). It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) <!-- at revision 7999e1d3359715c523056ef9478215996d62a620 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
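The pipeline above is just three steps: a BERT encoder, CLS-token pooling (`pooling_mode_cls_token: True`), and L2 normalization. A minimal NumPy sketch of the last two steps, using random dummy token embeddings in place of the real `BertModel` output:

```python
import numpy as np

# Dummy token embeddings standing in for the BertModel output:
# batch of 2 "sentences", 4 tokens each, 512 dimensions.
token_embeddings = np.random.rand(2, 4, 512)

# (1) Pooling with pooling_mode_cls_token=True: keep only the first
#     ([CLS]) token's embedding per sentence.
cls_embeddings = token_embeddings[:, 0, :]  # shape (2, 512)

# (2) Normalize(): scale each vector to unit L2 norm, so that a plain
#     dot product between two embeddings equals their cosine similarity.
norms = np.linalg.norm(cls_embeddings, axis=1, keepdims=True)
sentence_embeddings = cls_embeddings / norms

print(sentence_embeddings.shape)  # (2, 512)
```

Because of the final `Normalize()` module, the configured cosine similarity reduces to a matrix product over the output embeddings.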
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'1.一种面向能源互联网调配管理的数字化系统,其特征在于,包括数字化控制模块(1)、能源站群组、输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)和负荷供电模块(7),所述能源站群组包括若干电性连接的能源站(2),其中:\n\n所述数字化控制模块(1)分别与能源站群组、输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)电性连接,所述供电设备监测模块(5)和发电设备监测模块(6)与所述负荷供电模块(7)电性连接。\n\n2.根据权利要求1所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)用于监测电网运行中供电设备、发电设备、输电线路和用电设备的运行参数,并将该运行参数发送给数字化控制模块(1)。\n\n3.根据权利要求2所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述数字化控制模块(1)用于接收输电线路监测模块(3)、电网稳定性监测模块(4)、供电设备监测模块(5)、发电设备监测模块(6)发送的运行参数,对该运行参数进行数据分析,获取数据分析结果,依据该数据分析结果进行供配电调配。\n\n4.根据权利要求3所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述数字化控制模块(1)包括有控制单元(11)、指令单元(14)、决策单元(13)和信息收发单元(12)。\n\n5.根据权利要求4所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述控制单元(11)分别与指令单元14、决策单元(13)和信息收发单元(12)电性连接,所述指令单元(14)和决策单元(13)电性连接,所述决策单元(13)和信息收发单元(12)电性连接。\n\n6.根据权利要求5所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述电网稳定性监测模块(4)包括有数据采集单元(41)、数据转化单元(42)、分析单元(43)和数据交换单元(44),所述数据采集单元(41)、数据转化单元(42)、分析单元(43)均与数据交换单元(44)电性连接。\n\n7.根据权利要求6所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述负荷供电模块(7)包括有需求获取单元(71)、供电统计单元(72)、匹配单元(73)和切负荷单元(74)。\n\n8.根据权利要求7所述的一种面向能源互联网调配管理的数字化系统,其特征在于,所述需求获取单元(71)、供电统计单元(72)均与匹配单元(73)电性连接,所述匹配单元(73)与切负荷单元(74)电性连接。\n\n9.根据权利要求8所述的一种面向能源互联网调配管理的数字化系统,其特征在于,还包括互联网云端,所述互联网云端与所述数字化控制模块(1)电性连接。\n\n10.根据权利要求9所述的一种面向能源互联网调配管理的数字化系统,其特征在于,互联网云端用于连接互联网数据库,所述数字化控制模块(1)用于向互联网云端发送数据调用指令,互联网云端接收该数据调用指令,依据数据调用指令调取数据后发送给互联网云端。',
'1.一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,包括智能控制平台(1)、海上油田群组、输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)和负荷供电模块(7),所述海上油田群组包括若干电性连接的海上油田平台(2),其中:\n\n所述智能控制平台(1)分别与海上油田群组、输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)电性连接,所述供电设备监测平台(5)和发电设备监测平台(6)与所述负荷供电模块(7)电性连接。\n\n2.根据权利要求1所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)用于监测电网运行中供电设备、发电设备、输电线路和用电设备的运行参数,并将该运行参数发送给智能控制平台(1)。\n\n3.根据权利要求2所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述智能控制平台(1)用于接收输电线路传输监测模块(3)、电网稳定性监测平台(4)、供电设备监测平台(5)、发电设备监测平台(6)发送的运行参数,对该运行参数进行数据分析,获取数据分析结果,依据该数据分析结果进行供配电调配。\n\n4.根据权利要求3所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述智能控制平台(1)包括有总控台(101)、指令收发模块(104)、决策模块(103)和信息接收端(102)。\n\n5.根据权利要求4所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述总控台(101)分别与指令收发模块(104)、决策模块(103)和信息接收端(102)电性连接,所述指令收发模块(104)和决策模块(103)电性连接,所述决策模块(103)和信息接收端(102)电性连接。\n\n6.根据权利要求5所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述电网稳定性监测平台(4)包括有电力数据采集端(401)、模数信号转化模块(402)、稳定性分析模块(403)和数据交换模块(404),所述电力数据采集端(401)、模数信号转化模块(402)、稳定性分析模块(403)均与数据交换模块(404)电性连接。\n\n7.根据权利要求6所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述负荷供电模块(7)包括有负载需求获取模块(701)、供电设备统计模块(702)、匹配模块(703)和切负荷模块(704)。\n\n8.根据权利要求7所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,所述负载需求获取模块(701)、供电设备统计模块(702)均与匹配模块(703)电性连接,所述匹配模块(703)与切负荷模块(704)电性连接。\n\n9.根据权利要求8所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,还包括互联网云端(8),所述互联网云端(8)与所述智能控制平台(1)电性连接。\n\n10.根据权利要求9所述的一种基于岸基供电的海上油田群组一体化智能控制系统,其特征在于,互联网云端(8)用于连接互联网数据库,所述智能控制平台(1)用于向互联网云端(8)发送数据调用指令,互联网云端(8)接收该数据调用指令,依据数据调用指令调取数据后发送给互联网云端(8)。',
'{1.一种资源指示方法,其特征在于,包括:,获取下行控制信息;其中,所述下行控制信息包括第一信令;,当所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源时,根据所述第一信令,处理所述终端在所述第一时间段内的行为;和/或,,当所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源时,根据所述第一信令,处理所述终端从所述第一起始时刻之后的行为。,2.根据权利要求1所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,3.根据权利要求2所述的方法,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,4.根据权利要求2所述的方法,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,5.根据权利要求2所述的方法,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,6.根据权利要求1所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,7.根据权利要求6所述的方法,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,8.根据权利要求6或7所述的方法,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,9.根据权利要求1-8任一项所述的方法,其特征在于,所述根据所述第一信令,处理所述终端在所述第一时间段内的行为,包括:,当第一信令指示所述第一时间段内为所述终端配置参考信号资源时,确定在所述第一时间段内是否需要执行移动性无线电资源管理测量;当所述第一信令指示所述第一时间段内未为所述终端配置参考信号资源时,所述终端进入睡眠状态,并且在所述第一时间段内不执行移动性无线电资源管理测量;和/或,,所述根据所述第一信令,处理所述终端从所述第一起始时刻之后的行为,包括:,当第一信令指示所述第一起始时刻之后为所述终端配置参考信号资源时,确定在所述第一起始时刻之后为所述终端配置参考信号资源的时间频率位置上是否需要执行移动性无线电资源管理测量;当所述第一信令指示从所述第一起始时刻之后未为所述终端配置参考信号资源时,所述终端进入睡眠状态,并且在所述第一信令指示未为所述终端配置参考信号资源的时间频率位置上不执行移动性无线电资源管理测量。,10.一种资源指示方法,其特征在于,包括:,确定下行控制信息;,其中,所述下行控制信息包括第一信令,所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源;和/或,所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源;,向所述终端发送所述下行控制信息。,11.根据权利要求10所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,12.根据权利要求11所述的方法,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段内是
否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,13.根据权利要求11所述的方法,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,14.根据权利要求11所述的方法,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,15.根据权利要求10所述的方法,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,16.根据权利要求15所述的方法,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,17.根据权利要求15或16所述的方法,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,18.一种资源指示装置,其特征在于,包括:收发单元和处理单元,其中,,所述收发单元,用于获取下行控制信息;其中,所述下行控制信息包括第一信令;,当所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源时,所述处理单元,用于根据所述第一信令,处理所述终端在所述第一时间段内的行为;和/或,,当所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源时,所述处理单元,用于根据所述第一信令,处理所述终端从所述第一起始时刻之后的行为。,19.根据权利要求18所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,20.根据权利要求19所述的装置,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,21.根据权利要求19所述的装置,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,22.根据权利要求19所述的装置,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,23.根据权利要求18所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,24.根据权利要求23所述的装置,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,25.根据权利要求23或24所述的装置,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,26.根据权利要求18-25任一项所述的装置,其特征在于,当第一信令指示所述第一时间段内为所述终端配置参考信号资源时,所述处理单元,具体用于确定在所
述第一时间段内是否需要执行移动性无线电资源管理测量;当所述第一信令指示所述第一时间段内未为所述终端配置参考信号资源时,所述处理单元,具体用于控制所述终端进入睡眠状态,并且在所述第一时间段内不执行移动性无线电资源管理测量;和/或,,当第一信令指示所述第一起始时刻之后为所述终端配置参考信号资源时,所述处理单元,具体用于确定在所述第一起始时刻之后为所述终端配置参考信号资源的时间频率位置上是否需要执行移动性无线电资源管理测量;当所述第一信令指示从所述第一起始时刻之后未为所述终端配置参考信号资源时,所述处理单元,具体用于控制所述终端进入睡眠状态,并且在所述第一信令指示未为所述终端配置参考信号资源的时间频率位置上不执行移动性无线电资源管理测量。,27.一种资源指示装置,其特征在于,包括:,处理单元,用于确定下行控制信息;,其中,所述下行控制信息包括第一信令,所述第一信令用于指示第一时间段内没有数据调度,以及用于确定所述第一时间段内是否为终端配置参考信号资源;和/或,所述第一信令用于指示在第一起始时刻之后没有数据调度,以及用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源;,收发单元,用于向所述终端发送所述下行控制信息。,28.根据权利要求27所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于显性指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,29.根据权利要求28所述的装置,其特征在于,所述第一信令包括第一指示;,其中,所述第一指示用于指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一指示用于指示从所述第一起始时刻之后到不连续接收DRX非激活态定时器计时超时时刻之间的时间段内是否为所述终端配置参考信号资源。,30.根据权利要求28所述的装置,其特征在于,所述第一信令包括第一位图,所述第一位图包括一个或者多个比特,所述一个或者多个比特与所述第一起始时刻之后的一个或者多个参考信号资源一一关联;,所述一个或者多个比特中任一个比特用于指示是否为所述终端在所述第一起始时刻之后配置与所述任一个比特关联的参考信号资源。,31.根据权利要求28所述的装置,其特征在于,所述第一信令还用于指示第二时间段,所述第二时间段的起始时刻为第二起始时刻,所述第一信令还用于指示在所述第二时间段内,为所述终端未配置所述参考信号资源。,32.根据权利要求27所述的装置,其特征在于,所述第一信令用于确定所述第一时间段内是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示所述第一时间段内是否为所述终端配置参考信号资源;和/或,,所述第一信令用于确定从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:,所述第一信令用于隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源。,33.根据权利要求32所述的装置,其特征在于,所述第一信令隐式指示所述第一时间段内是否为所述终端配置参考信号资源,包括:所述第一信令用于指示所述第一时间段。,34.根据权利要求32或33所述的装置,其特征在于,所述第一信令隐式指示从所述第一起始时刻之后是否为所述终端配置参考信号资源,包括:所述第一信令用于指示在所述第一起始时刻停止DRX持续时间定时器和DRX非激活态定时器。,35.一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当所述指令被运行时,实现上述权利要求1-9任一项所述的资源指示方法、和/或,权利要求10-17任一项所述的资源指示方法。,36.一种资源指示装置,其特征在于,所述装置包括处理器和存储介质,所述存储介质存储有指令,所述指令被所述处理器运行时,实现如权利要求1至9任一项所述的资源指示方法,或者实现权利要求10至17任一项所述的任一项所述的资源指示方法。}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
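Since the embeddings come out unit-normalized, ranking a corpus of patent claims against a query reduces to a single matrix product. A minimal sketch with random stand-ins for `model.encode()` output (the real embeddings would come from the model above):

```python
import numpy as np

# Stand-in for model.encode() output: 4 unit-normalized 512-dim vectors,
# index 0 playing the query and indices 1..3 the candidate claims.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 512))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

query, corpus = emb[0], emb[1:]

# With unit vectors, cosine similarity is just a dot product.
scores = corpus @ query          # shape (3,)
ranking = np.argsort(-scores)    # candidate indices, best match first

print(scores.shape)  # (3,)
```

For larger corpora, the same idea is exposed directly by `sentence_transformers.util.semantic_search`.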
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 20,400 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 468 tokens</li><li>mean: 507.6 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 318 tokens</li><li>mean: 485.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 489.02 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
  |:-----------|:-----------|:-----------|
| <code>1.一种能量分配系统,其特征在于,包括a个双向节点,其中,a为大于等于1的整数,每个所述双向节点配置有一个第一能量输出模块与对应的一个第一能量输入模块,所有所述双向节点之间两两设置有一组开关器件。<br><br>2.根据权利要求1所述的能量分配系统,其特征在于,还包括b个输出节点,其中,b为大于等于0的整数,每个所述输出节点配置有一个第二能量输出模块,每个所述输出节点分别与各个所述双向节点之间设置有一组开关器件。<br><br>3.根据权利要求1或2所述的能量分配系统,其特征在于,还包括c个输入节点,其中,c为大于等于0的整数,每个所述输入节点配置有一个第二能量输入模块,每个所述输入节点分别与各个所述双向节点之间设置有一组开关器件。<br><br>4.根据权利要求1所述的能量分配系统,其特征在于,一组开关器件包括设置在能量输出模块正极与能量输入模块正极之间的第一开关器件、设置在能量输出模块负极与能量输入模块负极之间的第二开关器件。<br><br>5.根据权利要求1所述的能量分配系统,其特征在于,所述能量输出模块处于空闲状态或者占用状态,所述能量输入模块处于供能状态或者停止供能状态。</code> | <code>1.一种直流充电设备的功率分配系统,其特征在于:它包括至少两组充电模块(2)与充电终端(4)和功率分配装置(3),所述功率分配装置(3)分别连接于所述充电模块(2)和充电终端(4)之间,所述充电模块(2)与电网(1)连接;所述功率分配装置(3)包括多个功率分配单元,每个功率分配单元包括输入端口(31)、第一开关模块(32)和输出端口(34),每组充电模块(2)连接一个输入端口(31),所述输出端口(34)连接充电终端(4),所述输入端口(31)和输出端口(34)之间串联一组第一开关模块(32)。<br><br>2.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:所述充电终端(4)的数量不大于充电模块(2)的数量。<br><br>3.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:每个输入端口(31)和对应的输出端口(34)组成一个输出回路,任意两个输出回路之间连接一组第二开关模块(33),若有N个输出回路,则有组第二开关模块(33)用于连接所有的输出回路,N组第一开关模块(32)用于闭合输出回路。<br></code> | <code>{1.一种工艺建模整合系统,其特征在于,包括:,工艺流程导入模块,用于导入工艺流程各步骤和/或所述各步骤的参数结构;,系统逻辑验证模块,用于验证所述各步骤的完整性和正确性;,关联人定义模块,用于定义技术参数导入的多个关联人和/或验证的多个确认人;,技术参数导入和验证模块,用于导入技术参数并验证所述技术参数是否完整和正确;,数据整合模块,用于对验证通过的所述各步骤的所述技术参数进行整合;,输出模块,用以将所述数据整合模块输出的数据输出至建模系统。,2.如权利要求1所述的工艺建模整合系统,其特征在于,所述数据整合模块还用于对整合后的数据进行校验。,3.如权利要求1所述的工艺建模整合系统,其特征在于,所述数据整合模块可将数据整合结果反馈至关联人定义模块。,4.如权利要求1所述的工艺建模整合系统,其特征在于,所述技术参数验证模块包括:,技术参数导入模块,用于导入和/或补充所述各步骤的技术参数;,系统验证模块,用于验证所述技术参数是否完整和正确,并输出验证结果;,技术参数确认模块,用于确认所述技术参数是否符合产品技术要求,并输出确认结果。,5.如权利要求4所述的工艺建模整合系统,其特征在于,所述技术参数未通过所述系统验证模块的验证,将所述验证结果反馈至所述技术参数导入模块。,6.如权利要求4所述的工艺建模整合系统,其特征在于,所述技术参数未通过所述技术参数确认模块的确认,将所述确认结果反馈至所述技术参数导入模块。,7.一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机可执行的指令,其特征在于:当所述计算机可执行的指令被执行时实现权利要求1至6中任意一项所述的工艺建模整合系统。,8.一种计算机设备,其特征在于,包括处理器以及存储设备,所述处理器适于实现各指令,所述存储设备适于存储多条指令,所述指令适于由处理器加载并实现权利要求1至6中任意一项所述的工艺建模整合系统。}</code> |
| <code>1.一种能量分配系统,其特征在于,包括a个双向节点,其中,a为大于等于1的整数,每个所述双向节点配置有一个第一能量输出模块与对应的一个第一能量输入模块,所有所述双向节点之间两两设置有一组开关器件。<br><br>2.根据权利要求1所述的能量分配系统,其特征在于,还包括b个输出节点,其中,b为大于等于0的整数,每个所述输出节点配置有一个第二能量输出模块,每个所述输出节点分别与各个所述双向节点之间设置有一组开关器件。<br><br>3.根据权利要求1或2所述的能量分配系统,其特征在于,还包括c个输入节点,其中,c为大于等于0的整数,每个所述输入节点配置有一个第二能量输入模块,每个所述输入节点分别与各个所述双向节点之间设置有一组开关器件。<br><br>4.根据权利要求1所述的能量分配系统,其特征在于,一组开关器件包括设置在能量输出模块正极与能量输入模块正极之间的第一开关器件、设置在能量输出模块负极与能量输入模块负极之间的第二开关器件。<br><br>5.根据权利要求1所述的能量分配系统,其特征在于,所述能量输出模块处于空闲状态或者占用状态,所述能量输入模块处于供能状态或者停止供能状态。</code> | <code>1.一种直流充电设备的功率分配系统,其特征在于:它包括至少两组充电模块(2)与充电终端(4)和功率分配装置(3),所述功率分配装置(3)分别连接于所述充电模块(2)和充电终端(4)之间,所述充电模块(2)与电网(1)连接;所述功率分配装置(3)包括多个功率分配单元,每个功率分配单元包括输入端口(31)、第一开关模块(32)和输出端口(34),每组充电模块(2)连接一个输入端口(31),所述输出端口(34)连接充电终端(4),所述输入端口(31)和输出端口(34)之间串联一组第一开关模块(32)。<br><br>2.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:所述充电终端(4)的数量不大于充电模块(2)的数量。<br><br>3.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:每个输入端口(31)和对应的输出端口(34)组成一个输出回路,任意两个输出回路之间连接一组第二开关模块(33),若有N个输出回路,则有组第二开关模块(33)用于连接所有的输出回路,N组第一开关模块(32)用于闭合输出回路。<br></code> | 
<code>{1.一种超多芯光缆的制造方法,其特征在于,包括以下步骤:,1)制得基准束纤或基准束带;所述基准束纤由多根单纤经过绞合后外绕扎纱形成,所述基准束带由多根光纤单带经过绞合后外绕扎纱形成;,2)将多个基准束纤或多个基准束带经过绞合后外绕扎纱得到设定芯数的基准组合束;,3)将基准组合束通过挤塑机挤塑,在外部形成护套,得到超多芯光缆。,2.一种超多芯光缆的制造方法,其特征在于,包括以下步骤:,1)制得基准束纤或基准束带;所述基准束纤由多根单纤经过绞合后外绕扎纱形成,所述基准束带由多根光纤单带经过绞合后外绕扎纱形成;,2)将多个基准束纤或多个基准束带经过绞合后外绕扎纱得到基准组合束;,3)将多个基准组合束经过绞合后外绕扎纱得到设定芯数的组合束;,4)将组合束通过挤塑机挤塑,在外部形成护套,得到超多芯光缆;,步骤3)中组合束的绞合层数为n,每层为S或Z螺旋绞合,且相邻两层之间绞合方向相反。,3.如权利要求2所述的超多芯光缆的制造方法,其特征在于,基准组合束由多个基准束纤经过绞合后外绕扎纱得到,基准组合束的多个基准束纤通过不同的扎纱颜色进行区分,所述基准束纤由12根颜色不同的单纤经过绞合后外绕扎纱形成;步骤3)中多个基准组合束的扎纱颜色各不相同。,4.如权利要求2所述的超多芯光缆的制造方法,其特征在于,基准组合束由多个基准束带经过绞合后外绕扎纱得到,基准组合束的多个基准束带通过不同的扎纱颜色进行区分,所述光纤单带包括6根或12根颜色不同的单纤;步骤3)中多个基准组合束的扎纱颜色各不相同。,5.一种超多芯光缆的制造方法,其特征在于,包括以下步骤:,1)制得基准束纤或基准束带;所述基准束纤由多根单纤经过绞合后外绕扎纱形成,所述基准束带由多根光纤单带经过绞合后外绕扎纱形成;,2)将多个基准束纤或多个基准束带经过绞合后外绕扎纱得到基准组合束;,3)将多个基准组合束经过绞合后外绕扎纱得到组合束;,4)将前面得到的多个组合束经过绞合后外绕扎纱得到芯数更多的组合束,当得到的组合束的芯数满足要求时进行步骤5),否则重复步骤4直至得到设定芯数的组合束;,5)将满足要求的多个组合束经过绞合后外绕扎纱得到缆芯;,6)将缆芯通过挤塑机挤塑,在外部形成护套,得到超多芯光缆;,步骤5)中缆芯的绞合层数为n,每层为S或Z螺旋绞合,且相邻两层之间绞合方向相反。,6.如权利要求5所述的超多芯光缆的制造方法,其特征...</code> |
| <code>1.一种能量分配系统,其特征在于,包括a个双向节点,其中,a为大于等于1的整数,每个所述双向节点配置有一个第一能量输出模块与对应的一个第一能量输入模块,所有所述双向节点之间两两设置有一组开关器件。<br><br>2.根据权利要求1所述的能量分配系统,其特征在于,还包括b个输出节点,其中,b为大于等于0的整数,每个所述输出节点配置有一个第二能量输出模块,每个所述输出节点分别与各个所述双向节点之间设置有一组开关器件。<br><br>3.根据权利要求1或2所述的能量分配系统,其特征在于,还包括c个输入节点,其中,c为大于等于0的整数,每个所述输入节点配置有一个第二能量输入模块,每个所述输入节点分别与各个所述双向节点之间设置有一组开关器件。<br><br>4.根据权利要求1所述的能量分配系统,其特征在于,一组开关器件包括设置在能量输出模块正极与能量输入模块正极之间的第一开关器件、设置在能量输出模块负极与能量输入模块负极之间的第二开关器件。<br><br>5.根据权利要求1所述的能量分配系统,其特征在于,所述能量输出模块处于空闲状态或者占用状态,所述能量输入模块处于供能状态或者停止供能状态。</code> | <code>1.一种直流充电设备的功率分配系统,其特征在于:它包括至少两组充电模块(2)与充电终端(4)和功率分配装置(3),所述功率分配装置(3)分别连接于所述充电模块(2)和充电终端(4)之间,所述充电模块(2)与电网(1)连接;所述功率分配装置(3)包括多个功率分配单元,每个功率分配单元包括输入端口(31)、第一开关模块(32)和输出端口(34),每组充电模块(2)连接一个输入端口(31),所述输出端口(34)连接充电终端(4),所述输入端口(31)和输出端口(34)之间串联一组第一开关模块(32)。<br><br>2.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:所述充电终端(4)的数量不大于充电模块(2)的数量。<br><br>3.根据权利要求1所述的一种直流充电设备的功率分配系统,其特征在于:每个输入端口(31)和对应的输出端口(34)组成一个输出回路,任意两个输出回路之间连接一组第二开关模块(33),若有N个输出回路,则有组第二开关模块(33)用于连接所有的输出回路,N组第一开关模块(32)用于闭合输出回路。<br></code> | 
<code>{1.一种直线电机单自由度隔振装置的控制方法,其特征在于,所述直线电机单自由度隔振装置包括一个沿X轴方向运动的平衡块(11)、一个防漂移驱动单元以及控制单元;,所述平衡块(11)的上表面与直线电机的定子(12)固连,平衡块(11)的下表面通过第一气浮轴承(4a)与基座(1)连接,平衡块(11)的一侧安装有第一光栅尺(2b),第一光栅尺(2b)的光栅条纹沿X轴方向布置,在直线电机动子(13)上安装有与第一光栅尺(2b)对应的第一光栅读数头(2a);,所述防漂移驱动单元包括一个防漂移直线电机、一个光栅尺和一个导轨(10);导轨(10)与防漂移直线电机的动子(9)相固连,导轨(10)的一端在YZ平面通过第二气浮轴承(4b)与所述的平衡块(11)连接,导轨(10)在XY平面的一侧通过第三气浮轴承(4c)与防漂移直线电机定子(7)连接,导轨(10)在XZ平面的一侧通过第四气浮轴承(4d)与防漂移直线电机的定子(7)连接;防漂移直线电机的定子(7)与基座(1)相固连,导轨(10)的一侧安装有第二光栅尺(5b),第二光栅尺(5b)的光栅条纹沿X轴方向,在防漂移直线电机定子(7)上安装有与第二光栅尺(5b)对应的第二光栅尺读数头(5a);,所述控制单元包括含有控制程序的工控机、光栅计数卡、D/A卡和驱动器,光栅计数卡分别采集第一光栅尺(2b)和第二光栅尺(5b)信号,光栅计数卡将采集到的两路光栅信号输入至工控机,工控机以所述两路光栅信号为位置反馈信号对防漂移直线电机进行控制,控制指令通过D/A卡输出至驱动器;,所述方法包括如下步骤:,1)在伺服周期开始,设定平衡块位移为零,然后采用光栅计数卡采集第二光栅尺(5b)的信号,得到平衡块相对于基座的位移信号,并将该位移信号输入工控机作为位置反馈信号,得到平衡块的位移偏差e,;,2)采用第一非线性环节对位移偏差e,进行处理,第一非线性环节表达式为:,其中e,为平衡块的位移偏差,a,为偏置系数,b,为放大系数,c,为上升速率系数;,3)将第一非线性环节的输出信号通过平衡块线性控制器处理后,得到防漂移电机的控制指令,该控制指令由D/A卡进行数模转换后输入至驱动器,驱动器成比例地输出电流驱动防漂移电机;在下一个伺服周期,重复1)至3)步骤,进而驱动平衡块向设定位置运动。,2.一种直线电机单自由度隔振装置的控制方法,其特征在于,所述直线电机单自由度...</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.9
}
```
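For intuition, the cosine-distance triplet loss with margin 0.9 configured above can be sketched in plain NumPy. The helper names below are illustrative, not the sentence-transformers API:

```python
import numpy as np

def cosine_distance(u, v):
    # TripletDistanceMetric.COSINE: d(u, v) = 1 - cos(u, v)
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def triplet_loss(anchor, positive, negative, margin=0.9):
    # loss = max(0, d(anchor, positive) - d(anchor, negative) + margin)
    return max(0.0, cosine_distance(anchor, positive)
               - cosine_distance(anchor, negative) + margin)

a = np.array([1.0, 0.0])
p = np.array([1.0, 0.0])   # identical to the anchor: d(a, p) = 0
n = np.array([0.0, 1.0])   # orthogonal to the anchor: d(a, n) = 1
print(triplet_loss(a, p, n))  # 0.0 — the negative is already margin-far
```

Training pushes `d(anchor, positive)` below `d(anchor, negative) - margin`, at which point the loss for that triplet is zero.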
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.1961 | 500 | 0.6322 |
| 0.3922 | 1000 | 0.6262 |
| 0.5882 | 1500 | 0.6291 |
| 0.7843 | 2000 | 0.6292 |
| 0.9804 | 2500 | 0.6295 |
### Framework Versions
- Python: 3.9.19
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.5.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
xw17/SmolLM-1.7B-Instruct_finetuned_3_def_lora | xw17 | 2025-04-02T05:31:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-31T01:44:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
UCSC-VLAA/MedReason-Mistral | UCSC-VLAA | 2025-04-02T05:31:14Z | 0 | 0 | null | [
"safetensors",
"arxiv:2504.00993",
"license:apache-2.0",
"region:us"
] | null | 2025-04-02T00:12:38Z | ---
license: apache-2.0
---
# MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via Knowledge Graphs
<p align="center">
📃 <a href="https://arxiv.org/abs/2504.00993" target="_blank">Paper</a> |🤗 <a href="https://huggingface.co/UCSC-VLAA/MedReason-8B" target="_blank">MedReason-8B</a> | 📚 <a href="https://huggingface.co/datasets/UCSC-VLAA/MedReason" target="_blank">MedReason Data</a>
</p>
## ⚡Introduction
**MedReason** is a large-scale, high-quality medical reasoning dataset designed to enable faithful and explainable medical problem-solving in large language models (LLMs).
- We utilize a structured medical knowledge graph (KG) to convert clinical QA pairs into logical chains of reasoning, or “thinking paths”.
- Our pipeline generates detailed reasoning for various medical questions from 7 medical datasets, resulting in a dataset of **32,682** question-answer pairs, each with detailed, step-by-step explanations.
- By fine-tuning on the proposed [MedReason dataset](https://huggingface.co/datasets/UCSC-VLAA/MedReason), our best model, [MedReason-8B](https://huggingface.co/UCSC-VLAA/MedReason-8B), achieves *state-of-the-art* performance.
We open-source our models here.
## 👨⚕️ Model
- **Model Access**
| Model | Base Model | Link |
| ----------------- | ------------------------------------------------------------ | ---------------------------------------------------------- |
| MedReason-8B | [HuatuoGPT-o1-8B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B) | [Link](https://huggingface.co/UCSC-VLAA/MedReason-8B) |
| MedReason-Llama | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [Link](https://huggingface.co/UCSC-VLAA/MedReason-Llama) |
| MedReason-Mistral | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | [Link](https://huggingface.co/UCSC-VLAA/MedReason-Mistral) |
- **Deploy**: we provide example code for direct inference with MedReason-8B.
Also, MedReason-8B can be deployed with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang); we provide code for model deployment using Sglang in `./src/evaluation/eval.py`
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('UCSC-VLAA/MedReason-8B', torch_dtype="auto", device_map="auto", use_safetensors=True)
model.eval()
tokenizer = AutoTokenizer.from_pretrained('UCSC-VLAA/MedReason-8B', trust_remote_code=True, padding_side='left')
input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## 🙏🏼 Acknowledgement
We gratefully acknowledge the inspiring work of [HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1), which laid important groundwork for this research. We also thank the developers of the excellent tools [curator](https://github.com/bespokelabsai/curator/), [trl](https://github.com/huggingface/trl), and [sglang](https://github.com/sgl-project/sglang) for making this work possible.
## 📖 Citation
```
@misc{wu2025medreasonelicitingfactualmedical,
title={MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via Knowledge Graphs},
author={Juncheng Wu and Wenlong Deng and Xingxuan Li and Sheng Liu and Taomian Mi and Yifan Peng and Ziyang Xu and Yi Liu and Hyunjin Cho and Chang-In Choi and Yihan Cao and Hui Ren and Xiang Li and Xiaoxiao Li and Yuyin Zhou},
year={2025},
eprint={2504.00993},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.00993},
}
```
|
swarup3204/gemma-3-1b-pt-ft-dare | swarup3204 | 2025-04-02T05:24:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"base_model:google/gemma-3-1b-pt",
"base_model:merge:google/gemma-3-1b-pt",
"base_model:swarup3204/gemma-3-1b-pt-ft",
"base_model:merge:swarup3204/gemma-3-1b-pt-ft",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-01T08:01:52Z | ---
base_model:
- google/gemma-3-1b-pt
- swarup3204/gemma-3-1b-pt-ft
library_name: transformers
tags:
- mergekit
- merge
---
# model_output_ft
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) as a base.
### Models Merged
The following models were included in the merge:
* [swarup3204/gemma-3-1b-pt-ft](https://huggingface.co/swarup3204/gemma-3-1b-pt-ft)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: swarup3204/gemma-3-1b-pt-ft
parameters:
weight: 1.0
density: 0.66
merge_method: dare_ties
base_model: google/gemma-3-1b-pt
dtype: bfloat16
parameters:
normalize: false
int8_mask: true
```
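Conceptually, the DARE step drops a random (1 − density) fraction of the fine-tuned delta weights and rescales the survivors by 1/density before adding them back to the base model, so the merged delta is unbiased in expectation. A minimal NumPy sketch of that drop-and-rescale step (the TIES sign-election part is omitted, and all names here are illustrative, not mergekit's internals):

```python
import numpy as np

def dare_sparsify(delta, density, rng):
    # Keep each delta weight with probability `density`;
    # rescale survivors by 1/density so E[result] == delta.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

base = np.array([1.0, 2.0, 3.0, 4.0])   # base_model weights
ft   = np.array([1.5, 1.8, 3.2, 4.4])   # fine-tuned weights
delta = ft - base

rng = np.random.default_rng(0)
# weight 1.0 and density 0.66 mirror the YAML config above
merged = base + 1.0 * dare_sparsify(delta, density=0.66, rng=rng)
```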
|
MinaMila/llama_instbase_unlearned_Adult_1ep_22 | MinaMila | 2025-04-02T05:22:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:18:49Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bluesky49/sn80_02APR_05_20 | bluesky49 | 2025-04-02T05:21:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:20:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mejurix/medical-legal-embedder | mejurix | 2025-04-02T05:19:09Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"medical",
"legal",
"embedding",
"ner",
"clinical",
"custom_code",
"en",
"dataset:custom",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-01T05:52:30Z | ---
language: en
tags:
- medical
- legal
- embedding
- ner
- clinical
- bert
- transformers
license: mit
datasets:
- custom
metrics:
- cosine similarity
library_name: transformers
pipeline_tag: feature-extraction
---
# Mejurix Medical-Legal Embedding Model
This model is a fine-tuned Transformer (BERT-based) that generates high-quality embeddings for documents in medical and legal domains, with a focus on capturing the semantic relationships between medical and legal concepts. The model leverages NER (Named Entity Recognition) to better understand domain-specific entities and their relationships.
## Model Description
### Model Architecture
- **Base Architecture**: BERT (Bidirectional Encoder Representations from Transformers)
- **Base Model**: medicalai/ClinicalBERT
- **Modifications**:
- Custom embedding projection layer (768 → 256 dimensions)
- NER-enhanced attention mechanism
- Domain-specific fine-tuning
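A projection of the stated shape is simply a linear map from the encoder's 768-dimensional output to 256 dimensions. A hypothetical NumPy sketch, not the model's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((768, 256)) * 0.02   # hypothetical projection weights
b = np.zeros(256)                            # hypothetical bias

cls_hidden = rng.standard_normal(768)        # a [CLS] hidden state from the encoder
embedding = cls_hidden @ W + b               # 768 -> 256 projection
print(embedding.shape)  # (256,)
```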
### Key Features
- **Domain-Specific Embeddings**: Optimized for medical and legal text analysis
- **NER-Enhanced Understanding**: Utilizes named entity recognition to improve context awareness
- **Reduced Dimensionality**: 256-dimensional embeddings balance expressiveness and efficiency
- **Cross-Domain Connections**: Effectively captures relationships between medical findings and legal implications
- **Transformer-Based**: Leverages bidirectional attention mechanisms for better context understanding
## Performance Comparison
Our model outperforms other similar domain-specific models:
| Model | Avg Similarity | #Params | Notes |
|:---------------|-----------------:|:----------|:-----------------------|
| **Mejurix (ours)** | **0.9859** | 110M | Medical-legal + NER FT |
| ClinicalBERT | 0.9719 | 110M | No NER, no fine-tuning |
| BioBERT | 0.9640 | 110M | Domain medical only |
| LegalBERT | 0.9508 | 110M | Domain legal only |
The Mejurix model shows superior performance across all relationship types, particularly in cross-domain relationships between medical and legal concepts.
### Detailed Relationship-Type Comparison
Our model demonstrates consistently higher similarity scores across all relationship types compared to other domain-specific models:
| Relationship Type | Mejurix | ClinicalBERT | BioBERT | LegalBERT |
|------------------|---------|--------------|---------|-----------|
| DISEASE_MEDICATION | 0.9966 | 0.9921 | 0.9841 | 0.8514 |
| SEVERITY_PROGNOSIS | 1.0000 | 1.0000 | 1.0000 | 0.8381 |
| SEVERITY_COMPENSATION | 0.9997 | 0.9606 | 0.9713 | 0.8348 |
| DISEASE_TREATMENT | 0.9980 | 0.9778 | 0.9645 | 0.8359 |
| DIAGNOSIS_TREATMENT | 0.9995 | 0.9710 | 0.9703 | 0.8222 |
| LEGAL_SIMILAR_MEDICAL_DIFFERENT | 0.9899 | 0.9699 | 0.9792 | 0.8236 |
| TREATMENT_OUTCOME | 0.9941 | 0.9668 | 0.9745 | 0.8103 |
| OUTCOME_SETTLEMENT | 0.9847 | 0.9631 | 0.9534 | 0.7951 |
| MEDICAL_SIMILAR_LEGAL_DIFFERENT | 0.9936 | 0.9434 | 0.9414 | 0.7812 |
| SYMPTOM_DISEASE | 0.9934 | 0.9690 | 0.9766 | 0.8500 |
The Mejurix model particularly excels in cross-domain relationships such as MEDICAL_SIMILAR_LEGAL_DIFFERENT (0.9936) and SEVERITY_COMPENSATION (0.9997), showing significant improvement over other models in these complex relationship types.

## How to Use This Model
This model is directly available on the Hugging Face Hub and can be used with the Transformers library for feature extraction, sentence embeddings, and similarity calculations.
### Basic Usage with Transformers
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Load model and tokenizer
model_name = "mejurix/medical-legal-embedder" # The model's actual path on Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
# Generate embeddings for a single text
text = "The patient was diagnosed with L3 vertebral fracture, and a compensation claim is in progress."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
with torch.no_grad():
    outputs = model(**inputs)
# Use the [CLS] token embedding for sentence representation
embeddings = outputs.last_hidden_state[:, 0, :] # [CLS] token
print(f"Embedding shape: {embeddings.shape}") # Should be [1, 256]
```
### Using the Model for Similarity Calculation
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
# Load model and tokenizer
model_name = "mejurix/medical-legal-embedder" # The model's actual path on Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
def get_embedding(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # [CLS] token embedding

def compute_similarity(text1, text2):
    emb1 = get_embedding(text1)
    emb2 = get_embedding(text2)
    return F.cosine_similarity(emb1, emb2).item()
# Example
text1 = "Diagnosed with L3 spinal fracture."
text2 = "Compensation is needed for lumbar injury."
similarity = compute_similarity(text1, text2)
print(f"Similarity: {similarity:.4f}")
```
### Using with Hugging Face Pipelines
```python
from transformers import pipeline
# Create a feature-extraction pipeline
extractor = pipeline(
    "feature-extraction",
    model="mejurix/medical-legal-embedder",  # The model's actual path on Hugging Face Hub
    tokenizer="mejurix/medical-legal-embedder"
)
# Extract features
text = "The patient requires physical therapy following spinal surgery."
features = extractor(text)
# The output is a nested list with shape [1, sequence_length, hidden_size]
```
### Batch Processing
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("mejurix/medical-legal-embedder")
model = AutoModel.from_pretrained("mejurix/medical-legal-embedder")
# Prepare batch of texts
texts = [
    "The patient was diagnosed with L3 vertebral fracture",
    "Neck pain persisted after the accident",
    "Clinical test results were within normal range",
    "Compensation claim filed for permanent disability"
]
# Tokenize and get embeddings in a single pass
inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Get CLS token embeddings for each text in the batch
embeddings = outputs.last_hidden_state[:, 0, :]
print(f"Batch embeddings shape: {embeddings.shape}") # Should be [4, 256]
```
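For downstream clustering or retrieval, the batch embeddings above can be turned into a pairwise cosine-similarity matrix. The sketch below uses dummy tensors in place of real model outputs so it is self-contained:

```python
import torch
import torch.nn.functional as F

# `embeddings` stands in for the [4, 256] batch output shown above;
# random values keep the snippet self-contained.
embeddings = torch.randn(4, 256)

# Normalize rows so the dot product equals cosine similarity
normed = F.normalize(embeddings, dim=1)
similarity_matrix = normed @ normed.T  # (4, 4); the diagonal is 1.0
print(similarity_matrix.shape)  # torch.Size([4, 4])
```

The resulting matrix can be fed directly to similarity-based clustering (e.g. agglomerative clustering with a precomputed distance of `1 - similarity`).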
## Intended Uses & Limitations
### Intended Uses
- Medical-legal document similarity analysis
- Case relevance assessment
- Document clustering and organization
- Information retrieval in medical and legal domains
- Cross-referencing medical records with legal precedents
- Zero-shot text classification with custom categories
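The zero-shot classification use listed above can be sketched by comparing a text embedding against embeddings of free-form label descriptions. In the snippet below, the label set is hypothetical and fixed dummy 256-dimensional vectors stand in for real model outputs (in practice, both would come from `get_embedding` as shown earlier):

```python
import torch
import torch.nn.functional as F

def classify(text_emb, label_embs, labels):
    # One cosine score per candidate label; highest score wins
    scores = F.cosine_similarity(text_emb, label_embs)
    return labels[scores.argmax().item()], scores

labels = ["medical diagnosis", "legal compensation claim", "treatment plan"]
label_embs = torch.eye(3, 256)   # stand-in label embeddings
text_emb = torch.zeros(1, 256)
text_emb[0, 1] = 1.0             # constructed to be closest to the second label

best, scores = classify(text_emb, label_embs, labels)
print(best)  # legal compensation claim
```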
### Limitations
- Limited understanding of negations (current similarity: 0.7791)
- Temporal context differentiation needs improvement
- May not fully distinguish severity levels in medical conditions
- Maximum context length of 512 tokens (inherited from BERT architecture)
## Training and Evaluation
### Training
The model was fine-tuned on a specialized dataset of medical-legal document pairs covering the relationship types listed above (disease-treatment, severity-compensation, etc.). Training used a triplet loss with hard negative mining.
**Training Configuration:**
- Base model: medicalai/ClinicalBERT
- Embedding dimension reduction: 768 → 256
- Dropout: 0.5
- Learning rate: 1e-5
- Batch size: 16
- Weight decay: 0.1
- Triplet margin: 2.0
- Epochs: 15
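The triplet objective with hard negative mining can be sketched as follows. The training code is not released, so this is an illustrative reconstruction: the use of cosine distance and in-batch mining are assumptions, while the margin of 2.0 comes from the configuration above.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=2.0):
    # Triplet margin loss on cosine distance (distance = 1 - cosine similarity)
    d_pos = 1 - F.cosine_similarity(anchor, positive)
    d_neg = 1 - F.cosine_similarity(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def hardest_negatives(anchors, candidates):
    # Hard negative mining: for each anchor, select the currently
    # most similar (most confusable) candidate in the batch
    sims = anchors @ candidates.T  # (B, N) similarity matrix
    return candidates[sims.argmax(dim=1)]

anchors = F.normalize(torch.randn(4, 256), dim=1)
positives = F.normalize(torch.randn(4, 256), dim=1)
negatives = hardest_negatives(anchors, F.normalize(torch.randn(8, 256), dim=1))
loss = triplet_loss(anchors, positives, negatives)
print(loss.item())
```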
## Performance Observations
### Strengths
1. **Medical-Legal Cross-Concept Connection**: Effectively connects medical assessments with legal compensation concepts (0.8348)
2. **Medical Terminology Recognition**: Recognizes equivalent medical expressions across different terminologies (0.8414)
3. **Causality Understanding**: Accurately identifies cause-effect relationships (0.8236)
4. **Transformer Attention**: The bidirectional attention mechanism captures contextual relationships effectively
### Areas for Improvement
1. **Detailed Medical Terminology Differentiation**: Needs better recognition of severity differences
2. **Temporal Context Understanding**: Temporal differences in medical conditions need better differentiation
3. **Negation Handling**: Improved handling of negations needed
4. **Longer Context Windows**: Future versions could benefit from extended context length models
## Ethical Considerations
This model should be used as a tool to assist professionals, not as a replacement for medical or legal expertise. Decisions affecting patient care or legal outcomes should not be based solely on this model's output.
## Citation
If you use this model in your research, please cite:
```
@software{mejurix_medicallegal_embedder,
  author  = {Mejurix},
  title   = {Mejurix Medical-Legal Embedding Model},
  year    = {2025},
  version = {0.1.0},
  url     = {https://huggingface.co/mejurix/medical-legal-embedder}
}
```
## License
This project is distributed under the MIT License.
---
# 한국어 문서 / Korean Documentation
# Mejurix 의료-법률 임베딩 모델
본 모델은 의료 및 법률 도메인의 텍스트에 특화된 임베딩을 생성하는 미세 조정된 트랜스포머(BERT 기반) 모델입니다. 의료 및 법률 개념 간의 의미론적 관계를 포착하는 데 중점을 두고 있으며, 개체명 인식(NER)을 활용하여 도메인 특화 엔티티와 그 관계를 더 잘 이해합니다.
## 모델 설명
### 모델 아키텍처
- **기본 아키텍처**: BERT (Bidirectional Encoder Representations from Transformers)
- **기반 모델**: medicalai/ClinicalBERT
- **주요 수정사항**:
- 사용자 정의 임베딩 투영 레이어 (768 → 256 차원)
- NER 강화 어텐션 메커니즘
- 도메인 특화 미세 조정
### 주요 특징
- **도메인 특화 임베딩**: 의료 및 법률 텍스트 분석에 최적화
- **NER 강화 이해**: 개체명 인식을 활용하여 맥락 인식 개선
- **차원 축소**: 256차원 임베딩으로 표현력과 효율성의 균형 유지
- **크로스 도메인 연결**: 의료 소견과 법률적 함의 간의 관계를 효과적으로 포착
- **트랜스포머 기반**: 양방향 어텐션 메커니즘을 활용하여 맥락 이해 향상
## 성능 비교
본 모델은 유사한 도메인 특화 모델들보다 우수한 성능을 보입니다:
| 모델 | 평균 유사도 | 파라미터 수 | 비고 |
|:--------------|------------:|:------------|:------------------------|
| **Mejurix (본 모델)** | **0.9859** | 110M | 의료-법률 + NER 미세 조정 |
| ClinicalBERT | 0.9719 | 110M | NER 없음, 미세 조정 없음 |
| BioBERT | 0.9640 | 110M | 의료 도메인만 특화 |
| LegalBERT | 0.9508 | 110M | 법률 도메인만 특화 |
Mejurix 모델은 모든 관계 유형에서 우수한 성능을 보이며, 특히 의료와 법률 개념 간의 크로스 도메인 관계에서 두드러집니다.
### 관계 유형별 상세 비교
본 모델은 다른 도메인 특화 모델과 비교하여 모든 관계 유형에서 일관되게 높은 유사도 점수를 보여줍니다:
| 관계 유형 | Mejurix | ClinicalBERT | BioBERT | LegalBERT |
|------------------|---------|--------------|---------|-----------|
| DISEASE_MEDICATION (질병-약물) | 0.9966 | 0.9921 | 0.9841 | 0.8514 |
| SEVERITY_PROGNOSIS (중증도-예후) | 1.0000 | 1.0000 | 1.0000 | 0.8381 |
| SEVERITY_COMPENSATION (중증도-보상) | 0.9997 | 0.9606 | 0.9713 | 0.8348 |
| DISEASE_TREATMENT (질병-치료) | 0.9980 | 0.9778 | 0.9645 | 0.8359 |
| DIAGNOSIS_TREATMENT (진단-치료) | 0.9995 | 0.9710 | 0.9703 | 0.8222 |
| LEGAL_SIMILAR_MEDICAL_DIFFERENT (법적 유사-의학적 상이) | 0.9899 | 0.9699 | 0.9792 | 0.8236 |
| TREATMENT_OUTCOME (치료-결과) | 0.9941 | 0.9668 | 0.9745 | 0.8103 |
| OUTCOME_SETTLEMENT (결과-합의) | 0.9847 | 0.9631 | 0.9534 | 0.7951 |
| MEDICAL_SIMILAR_LEGAL_DIFFERENT (의학적 유사-법적 상이) | 0.9936 | 0.9434 | 0.9414 | 0.7812 |
| SYMPTOM_DISEASE (증상-질병) | 0.9934 | 0.9690 | 0.9766 | 0.8500 |
Mejurix 모델은 특히 MEDICAL_SIMILAR_LEGAL_DIFFERENT(0.9936)와 SEVERITY_COMPENSATION(0.9997)과 같은 크로스 도메인 관계에서 탁월한 성능을 보이며, 이러한 복잡한 관계 유형에서 다른 모델보다 큰 개선을 보여줍니다.

## 모델 사용 방법
이 모델은 Hugging Face Hub에서 직접 사용 가능하며, Transformers 라이브러리를 통해 특성 추출, 문장 임베딩 및 유사도 계산에 활용할 수 있습니다.
### Transformers를 사용한 기본 사용법
```python
import torch
from transformers import AutoModel, AutoTokenizer
# 모델 및 토크나이저 로드
model_name = "mejurix/medical-legal-embedder" # Hugging Face Hub에 있는 실제 모델 경로
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
# 단일 텍스트에 대한 임베딩 생성
text = "환자는 L3 척추 골절 진단을 받았으며, 보상 청구가 진행 중입니다."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
with torch.no_grad():
    outputs = model(**inputs)
# 문장 표현에 [CLS] 토큰 임베딩 사용
embeddings = outputs.last_hidden_state[:, 0, :] # [CLS] 토큰
print(f"임베딩 형태: {embeddings.shape}") # [1, 256]이어야 함
```
### 유사도 계산에 모델 사용하기
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
# 모델 및 토크나이저 로드
model_name = "mejurix/medical-legal-embedder" # Hugging Face Hub에 있는 실제 모델 경로
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
def get_embedding(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # [CLS] 토큰 임베딩

def compute_similarity(text1, text2):
    emb1 = get_embedding(text1)
    emb2 = get_embedding(text2)
    return F.cosine_similarity(emb1, emb2).item()
# 예시
text1 = "L3 척추 골절 진단을 받았습니다."
text2 = "요추 부상에 대한 보상이 필요합니다."
similarity = compute_similarity(text1, text2)
print(f"유사도: {similarity:.4f}")
```
### Hugging Face 파이프라인 사용하기
```python
from transformers import pipeline
# 특성 추출 파이프라인 생성
extractor = pipeline(
    "feature-extraction",
    model="mejurix/medical-legal-embedder",  # Hugging Face Hub에 있는 실제 모델 경로
    tokenizer="mejurix/medical-legal-embedder"
)
# 특성 추출
text = "환자는 척추 수술 후 물리 치료가 필요합니다."
features = extractor(text)
# 출력은 [1, sequence_length, hidden_size] 형태의 중첩된 리스트
```
## 활용 분야 및 한계점
### 활용 분야
- 의료-법률 문서 유사도 분석
- 사례 관련성 평가
- 문서 클러스터링 및 조직화
- 의료 및 법률 도메인에서의 정보 검색
- 의료 기록과 법적 선례의 상호 참조
- 사용자 정의 카테고리를 사용한 제로샷 텍스트 분류
### 한계점
- 부정문에 대한 이해 제한(현재 유사도: 0.7791)
- 시간적 맥락 구분 개선 필요
- 의료 상태의 중증도 수준을 완전히 구분하지 못할 수 있음
- 최대 컨텍스트 길이 512 토큰(BERT 아키텍처에서 상속)
## 학습 및 평가
### 학습
이 모델은 다양한 관계 유형(질병-치료, 중증도-보상 등)을 포함하는 의료-법률 문서 쌍의 특수 데이터셋에서 미세 조정되었습니다. 학습에는 어려운 부정적 사례 마이닝을 통한 트리플렛 손실(triplet loss)이 사용되었습니다.
**학습 구성:**
- 기반 모델: medicalai/ClinicalBERT
- 임베딩 차원 축소: 768 → 256
- 드롭아웃: 0.5
- 학습률: 1e-5
- 배치 크기: 16
- 가중치 감소: 0.1
- 트리플렛 마진: 2.0
- 에폭: 15
## 인용
학술 연구에서 이 모델을 사용하는 경우 다음과 같이 인용해 주세요:
```
@software{mejurix_medicallegal_embedder,
  author  = {Mejurix},
  title   = {Mejurix Medical-Legal Embedding Model},
  year    = {2025},
  version = {0.1.0},
  url     = {https://huggingface.co/mejurix/medical-legal-embedder}
}
```
## 라이선스
이 프로젝트는 MIT 라이선스에 따라 배포됩니다. |
redsgnaoh/model50 | redsgnaoh | 2025-04-02T05:17:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T05:03:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bytejack007/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF | bytejack007 | 2025-04-02T05:16:38Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | sentence-similarity | 2025-04-02T05:16:25Z | ---
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.511868162026175
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.007803189284004
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.20754608934859
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.818037697335505
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 39.386760057101945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.89687154075537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82153952668092
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.094465801879295
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.65446577183913
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.30749237193961
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.581627240203474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.21317724305628
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 42.49825170976724
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 68.23769904483508
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 62.50294403136556
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.594104491193555
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 70.55290063940157
- type: v_measure
value: 55.41500719337263
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 28.301882091023288
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 45.26992995191701
- type: v_measure
value: 42.773174876871145
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 71.04138999801822
- type: v_measure
value: 71.7056263158008
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
---
# bytejack007/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-1.5B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bytejack007/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bytejack007/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo bytejack007/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo bytejack007/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -c 2048
```
|
swarup3204/gemma-3-1b-pt-peft-dare | swarup3204 | 2025-04-02T05:16:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"base_model:google/gemma-3-1b-pt",
"base_model:merge:google/gemma-3-1b-pt",
"base_model:swarup3204/gemma-3-1b-pt-peft",
"base_model:merge:swarup3204/gemma-3-1b-pt-peft",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-01T07:01:16Z | ---
base_model:
- swarup3204/gemma-3-1b-pt-peft
- google/gemma-3-1b-pt
library_name: transformers
tags:
- mergekit
- merge
---
# model_output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) as a base.
### Models Merged
The following models were included in the merge:
* [swarup3204/gemma-3-1b-pt-peft](https://huggingface.co/swarup3204/gemma-3-1b-pt-peft)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: swarup3204/gemma-3-1b-pt-peft
parameters:
weight: 1.0
density: 0.4
merge_method: dare_ties
base_model: google/gemma-3-1b-pt
dtype: bfloat16
parameters:
normalize: false
int8_mask: true
```
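As a rough illustration of what the `density` parameter above controls, here is a hedged toy sketch of the DARE drop-and-rescale step on a task-vector delta. This is an illustration only: mergekit's actual implementation operates on full weight tensors and combines this step with TIES-style sign election.

```python
# Toy sketch of DARE (Drop And REscale) -- illustration only, not mergekit's code.
import random

def dare(delta, density, rng):
    # Keep each delta entry with probability `density` and rescale survivors
    # by 1/density, preserving the expected value of the merged delta.
    return [d / density if rng.random() < density else 0.0 for d in delta]

base = [1.0, 2.0, 3.0, 4.0]          # hypothetical base-model weights
finetuned = [1.5, 1.8, 3.6, 4.4]     # hypothetical fine-tuned weights
delta = [f - b for f, b in zip(finetuned, base)]
sparse_delta = dare(delta, density=0.4, rng=random.Random(0))
merged = [b + d for b, d in zip(base, sparse_delta)]
```

With `density: 0.4`, roughly 60% of the delta entries are dropped and the survivors are scaled up, which is why a sparse merge can still approximate the fine-tuned behavior.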
|
amirreza87/Ravesh_MLT-1 | amirreza87 | 2025-04-02T05:14:57Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-02T05:14:57Z | ---
license: apache-2.0
---
|
lama9876/Xception-Model | lama9876 | 2025-04-02T05:13:43Z | 0 | 0 | null | [
"Xception",
"license:mit",
"region:us"
] | null | 2025-04-02T00:38:42Z | ---
license: mit
---
# Xception Model for Emotion Recognition
This model is based on the **Xception** architecture and trained on the **FER2013** and **CK+** datasets for **emotion recognition**.
## Model Details
- **Architecture**: Xception
- **Input Shape**: (48, 48, 1)
- **Output Shape**: 7 classes (emotion categories)
- **Pretrained on**: FER2013 dataset (Augmented)
- **File Type**: `.h5`
## How to Use the Model
### Using the Hugging Face Inference API:
You can use this model directly through the **Hugging Face Inference API**. Here's an example of how to use it:
```python
from transformers import pipeline
# Replace with the model's name on Hugging Face
model_name = "lama9876/Xception-Model"
# Load the model using the pipeline
model = pipeline('image-classification', model=model_name)
# Make a prediction (replace with the path to your image)
result = model("path_to_image.jpg")
print(result)
```
|
Hiridharan10/llama-3-3b-coder-V2-gguf | Hiridharan10 | 2025-04-02T05:12:41Z | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"gguf",
"code",
"en",
"dataset:codeparrot/apps",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-11T14:22:45Z | ---
datasets:
- codeparrot/apps
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
library_name: adapter-transformers
tags:
- code
--- |
PrunaAI/NexaAIDev-Octopus-v2-bnb-4bit-smashed | PrunaAI | 2025-04-02T05:10:51Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"pruna-ai",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-16T13:03:32Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
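The suffix rule described in the naming-convention answer above can be sketched as follows (illustrative only, not Pruna's actual code; each argument is the smashed/base ratio of the measured metric):

```python
def pruna_suffixes(speed_ratio: float, memory_ratio: float,
                   energy_ratio: float) -> list:
    """Append 'turbo', 'tiny', or 'green' when the corresponding measured
    metric falls below 90% of the original base model's."""
    checks = [("turbo", speed_ratio), ("tiny", memory_ratio), ("green", energy_ratio)]
    return [name for name, ratio in checks if ratio < 0.9]

print(pruna_suffixes(0.5, 0.95, 0.8))  # ['turbo', 'green']
```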
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/NexaAIDev-Octopus-v2-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
wqerrewetw/Qwen-2.5-7B-1M-RRP-v1-lora-F16-GGUF | wqerrewetw | 2025-04-02T05:09:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"dataset:AMead10/Sky-T1_data_17k_sharegpt",
"dataset:Chaser-cz/sonnet35-charcard-roleplay-sharegpt",
"base_model:bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora",
"base_model:quantized:bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T05:09:21Z | ---
base_model: bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
datasets:
- AMead10/Sky-T1_data_17k_sharegpt
- Chaser-cz/sonnet35-charcard-roleplay-sharegpt
---
# wqerrewetw/Qwen-2.5-7B-1M-RRP-v1-lora-F16-GGUF
This LoRA adapter was converted to GGUF format from [`bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora`](https://huggingface.co/bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Qwen-2.5-7B-1M-RRP-v1-lora-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Qwen-2.5-7B-1M-RRP-v1-lora-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
Finanahnahnah/layered-paper-art | Finanahnahnah | 2025-04-02T05:08:47Z | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-02T03:58:24Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
layered paper art, the colorful collage captures a Summer European garden near the ocean with cute kitties in the front
output:
url: images/ComfyUI_00231_.png
- text: >-
layered paper art, the colorful collage captures a rocket near the coast
output:
url: images/ComfyUI_00220_.png
- text: >-
layered paper art, the colorful collage captures The Great Wall of China
output:
url: images/ComfyUI_00186_.png
- text: >-
layered paper art, the colorful collage captures a therapist's room
output:
url: images/ComfyUI_00237_.png
- text: >-
layered paper art, the colorful collage captures a cute raccoon searching for food in a kitchen
output:
url: images/ComfyUI_00247_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: layered paper art, the colorful collage captures...
license: other
license_name: flux.1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Layered Paper Art
<Gallery />
## Model description
A layered paper art style.
I trained this LoRA using the [AI Toolkit](https://github.com/ostris/ai-toolkit). The model achieved promising results by the 2200th training step.
## Trigger words
You should use `layered_paper_art` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Finanahnahnah/layered-paper-art/tree/main) them in the Files & versions tab. |
agilan1102/esysflow_llm | agilan1102 | 2025-04-02T05:08:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-04-02T05:07:51Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
mradermacher/Hermes-3-Remix-L3.2-3b-GGUF | mradermacher | 2025-04-02T05:08:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Hermes-3-Remix-L3.2-3b",
"base_model:quantized:mergekit-community/Hermes-3-Remix-L3.2-3b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T04:41:14Z | ---
base_model: mergekit-community/Hermes-3-Remix-L3.2-3b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/Hermes-3-Remix-L3.2-3b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Remix-L3.2-3b-GGUF/resolve/main/Hermes-3-Remix-L3.2-3b.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
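As a rough cross-check of the sizes in the table above, bits-per-weight can be estimated from file size and parameter count. A small sketch follows; the ~3.2B parameter count is an assumption based on the 3B-class base model, and file sizes are taken as decimal GB:

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Approximate bits per weight: total bits in the file over parameter count."""
    return (file_size_gb * 1e9 * 8) / (n_params_billion * 1e9)

# e.g. the Q4_K_M file above: ~2.1 GB for an assumed ~3.2B-parameter model
print(round(bits_per_weight(2.1, 3.2), 2))  # ≈ 5.25 bits/weight
```

The estimate runs a little above the quant's nominal bit width because GGUF files also carry scales, the embedding table, and some tensors kept at higher precision.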
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Membersuger/miners_cp2_w22 | Membersuger | 2025-04-02T05:08:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T04:50:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hasnonname/MS3.1-24B-Mlen-v1-Q6_K-GGUF | Hasnonname | 2025-04-02T05:05:03Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:trashpanda-org/MS3.1-24B-Mlen-v1",
"base_model:quantized:trashpanda-org/MS3.1-24B-Mlen-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-02T05:03:34Z | ---
base_model: trashpanda-org/MS3.1-24B-Mlen-v1
tags:
- llama-cpp
- gguf-my-repo
---
# Hasnonname/MS3.1-24B-Mlen-v1-Q6_K-GGUF
This model was converted to GGUF format from [`trashpanda-org/MS3.1-24B-Mlen-v1`](https://huggingface.co/trashpanda-org/MS3.1-24B-Mlen-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/trashpanda-org/MS3.1-24B-Mlen-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hasnonname/MS3.1-24B-Mlen-v1-Q6_K-GGUF --hf-file ms3.1-24b-mlen-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hasnonname/MS3.1-24B-Mlen-v1-Q6_K-GGUF --hf-file ms3.1-24b-mlen-v1-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Hasnonname/MS3.1-24B-Mlen-v1-Q6_K-GGUF --hf-file ms3.1-24b-mlen-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Hasnonname/MS3.1-24B-Mlen-v1-Q6_K-GGUF --hf-file ms3.1-24b-mlen-v1-q6_k.gguf -c 2048
```
|
AXERA-TECH/Qwen2.5-1.5B-Instruct-CTX-Int8 | AXERA-TECH | 2025-04-02T05:03:59Z | 0 | 0 | transformers | [
"transformers",
"Context",
"Qwen2.5-1.5B",
"text-generation",
"zh",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-01T10:17:13Z | ---
license: mit
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- Context
- Qwen2.5-1.5B
---
# Qwen2.5-1.5B-Instruct-CTX-Int8
This version of Qwen2.5-1.5B-Instruct-CTX-Int8 has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 4.0 (not released yet)
## Feature
- Support for longer contexts (2.5k in this sample)
- Support for multi-turn context dialogue
- Support for caching the system prompt's kv cache
## Convert tools links:
If you are interested in model conversion, you can try exporting the axmodel from the original repo: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU AXEngine LLM Runtime](https://github.com/ZHEQIUSHUI/ax-llm/tree/prefill_kvcaches_context)
[AXera NPU AXCL LLM Runtime](https://github.com/ZHEQIUSHUI/ax-llm/tree/axcl-context-kvcache)
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- *TBD*
|Chips|w8a16|w4a16| DDR | Flash |
|--|--|--|--|--|
|AX650| 11 tokens/sec| *TBD* | 2.3GB | 2.3GB |
## How to use
Download all files from this repository to the device
```
root@ax650:/mnt/qtang/llm-test/Qwen2.5-1.5B-Instruct-CTX-Int8# tree -L 1
.
├── kvcache
├── main
├── main_axcl_aarch64
├── main_axcl_x86
├── post_config.json
├── qwen2.5-1.5b-ctx-ax650
├── qwen2.5_tokenizer
├── qwen2.5_tokenizer_uid.py
├── run_qwen2.5_1.5b_ctx_ax650.sh
├── run_qwen2.5_1.5b_ctx_axcl_aarch64.sh
└── run_qwen2.5_1.5b_ctx_axcl_x86.sh
```
#### Start the Tokenizer service
```
root@ax650:/mnt/qtang/llm-test/Qwen2.5-1.5B-Instruct-CTX-Int8# python qwen2.5_tokenizer_uid.py
Server running at http://0.0.0.0:12345
```
#### System prompt cache
- The System prompt can be preset through the configuration file from `--system_prompt`
- The System prompt can be cached in the form of kv cache to a specified folder for quick loading at the next run time from `--kvcache_path`
- This folder needs to be created manually before running, for example `mkdir kvcache`
```
(base) axera@raspberrypi:~/samples/qwen2.5-1.5b-ctx $ cat run_qwen2.5_1.5b_ctx_axcl_aarch64.sh
./main_axcl_aarch64 \
--system_prompt "你的名字叫小智(allen),你是一个人畜无害的AI助手。深圳市今天(4月1日)阴天,愚人节,气温在14°C至19°C之间,微风。" \
--kvcache_path "./kvcache" \
--template_filename_axmodel "qwen2.5-1.5b-ctx-ax650/qwen2_p128_l%d_together.axmodel" \
--axmodel_num 28 \
--tokenizer_type 2 \
--url_tokenizer_model "http://127.0.0.1:12345" \
--filename_post_axmodel "qwen2.5-1.5b-ctx-ax650/qwen2_post.axmodel" \
--filename_tokens_embed "qwen2.5-1.5b-ctx-ax650/model.embed_tokens.weight.bfloat16.bin" \
--tokens_embed_num 151936 \
--tokens_embed_size 1536 \
--use_mmap_load_embed 1 \
--live_print 1 \
--devices 0
```
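The runtime's load-or-generate behavior shown in the logs below (it falls back to generating the cache when `k_cache_0.bin`/`v_cache_0.bin` are missing) can be sketched as a simple existence check. The per-layer file naming is inferred from the runtime's error messages and is otherwise an assumption:

```python
import os

def kvcache_ready(path: str, num_layers: int = 28) -> bool:
    """Return True when per-layer k/v cache files exist under `path`, mirroring
    the runtime's k_cache_<i>.bin / v_cache_<i>.bin naming (illustrative sketch;
    the 28-layer count matches --axmodel_num above)."""
    return all(
        os.path.exists(os.path.join(path, f"{kind}_cache_{i}.bin"))
        for i in range(num_layers)
        for kind in ("k", "v")
    )
```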
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
Open another terminal and run `run_qwen2.5_1.5b_ctx_ax650.sh`
```
root@ax650:/mnt/qtang/llm-test/Qwen2.5-1.5B-Instruct-CTX-Int8# mkdir -p kvcache
root@ax650:/mnt/qtang/llm-test/Qwen2.5-1.5B-Instruct-CTX-Int8# ./run_qwen2.5_1.5b_ctx_ax650.sh
[I][ Init][ 107]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
bos_id: -1, eos_id: 151645
3% | ██ | 1 / 31 [0.21s<6.39s, 4.85 count/s] tokenizer init ok
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 31 / 31 [5.04s<5.04s, 6.15 count/s] init post axmodel ok,remain_cmm(9656 MB)
[I][ Init][ 185]: max_token_len : 2559
[I][ Init][ 190]: kv_cache_size : 256, kv_cache_num: 2559
[I][ Init][ 198]: prefill_token_num : 128
[I][ Init][ 202]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 202]: grp: 2, prefill_max_token_num : 512
[I][ Init][ 202]: grp: 3, prefill_max_token_num : 1024
[I][ Init][ 202]: grp: 4, prefill_max_token_num : 1536
[I][ Init][ 202]: grp: 5, prefill_max_token_num : 2048
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 213]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[E][ load_kvcache][ 101]: k_cache ./kvcache/k_cache_0.bin or v_cache ./kvcache/v_cache_0.bin not exist
[W][ main][ 217]: load kvcache from path: ./kvcache failed,generate kvcache
100% | ████████████████████████████████ | 53 / 53 [4.12s<4.12s, 12.85 token/s]
[I][ GetKVCache][ 325]: precompute_len:53
[I][ main][ 224]: generate kvcache to path: ./kvcache
[I][ main][ 226]: precompute_len: 53
[I][ main][ 227]: system_prompt: 你的名字叫小智(allen),你是一个人畜无害的AI助手。深圳市今天(4月1日)阴天,愚人节,气温在14°C至19°C之间,微风。
prompt >> who are you?
[I][ SetKVCache][ 354]: prefill_grpid:2 kv_cache_num:512 precompute_len:53 input_num_token:12
[I][ Run][ 527]: input_embed_num(12)
[I][ Run][ 642]: ttft: 537.06 ms
我是Allen,一个能够回答问题、提供信息和执行任务的虚拟助手。我可以帮助你解决各种问题、做计划、玩游戏、甚至是进行一些娱乐活动。请问有什么我能帮助你的吗?
[N][ Run][ 756]: hit eos,avg 11.09 token/s
[I][ GetKVCache][ 325]: precompute_len:108
prompt >> 今天是几号,天气怎么样
[I][ SetKVCache][ 354]: prefill_grpid:2 kv_cache_num:512 precompute_len:108 input_num_token:15
[I][ Run][ 527]: input_embed_num(15)
[I][ Run][ 642]: ttft: 536.81 ms
今天是4月1日,愚人节。根据您所描述的深圳天气情况,气温在14°C至19°C之间,气温较低,建议穿着适当。希望您今天愉快!
[N][ Run][ 756]: hit eos,avg 11.17 token/s
[I][ GetKVCache][ 325]: precompute_len:166
```
#### Inference with M.2 Accelerator card
[What is the M.2 Accelerator card?](https://axcl-pi5-examples-cn.readthedocs.io/zh-cn/latest/index.html). This demo runs on a Raspberry Pi 5.
```
(base) axera@raspberrypi:~/samples/Qwen2.5-1.5B-Instruct-CTX-Int8 $ mkdir -p kvcache
(base) axera@raspberrypi:~/samples/Qwen2.5-1.5B-Instruct-CTX-Int8 $ ./run_qwen2.5_1.5b_ctx_axcl_aarch64.sh
[I][ Init][ 134]: LLM init start
[I][ Init][ 41]: connect http://127.0.0.1:12345 ok
bos_id: -1, eos_id: 151645
3% | ██ | 1 / 31 [0.46s<14.11s, 2.20 count/s] tokenizer init ok
[I][ Init][ 45]: LLaMaEmbedSelector use mmap
6% | ███ | 2 / 31 [0.46s<7.05s, 4.40 count/s] embed_selector init ok
[I][ run][ 30]: AXCLWorker start with devid 0
100% | ████████████████████████████████ | 31 / 31 [29.18s<29.18s, 1.06 count/s] init post axmodel ok,remain_cmm(-1 MB)m(-1 MB)
[I][ Init][ 235]: max_token_len : 2559
[I][ Init][ 238]: kv_cache_size : 256, kv_cache_num: 2559
[I][ Init][ 246]: prefill_token_num : 128
[I][ Init][ 250]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 250]: grp: 2, prefill_max_token_num : 512
[I][ Init][ 250]: grp: 3, prefill_max_token_num : 1024
[I][ Init][ 250]: grp: 4, prefill_max_token_num : 1536
[I][ Init][ 250]: grp: 5, prefill_max_token_num : 2048
________________________
| ID| remain cmm(MB)|
========================
| 0| -1|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 275]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[E][ load_kvcache][ 100]: k_cache ./kvcache/k_cache_0.bin or v_cache ./kvcache/v_cache_0.bin not exist
[W][ main][ 223]: load kvcache from path: ./kvcache failed,generate kvcache
100% | ████████████████████████████████ | 53 / 53 [5.06s<5.06s, 10.47 token/s]
[I][ GetKVCache][ 419]: precompute_len:53
[I][ main][ 230]: generate kvcache to path: ./kvcache
[I][ main][ 232]: precompute_len: 53
[I][ main][ 233]: system_prompt: 你的名字叫小智(allen),你是一个人畜无害的AI助手。深圳市今天(4月1日)阴天,愚人节,气温在14°C至19°C之间,微风。
prompt >> 你是谁
[I][ SetKVCache][ 448]: prefill_grpid:2 kv_cache_num:512 precompute_len:53 input_num_token:10
[I][ Run][ 722]: input token num : 10
[I][ Run][ 823]: ttft: 548.23 ms
我是深圳市气象局发布的天气预报,我叫小智,是为了解答大家关于天气的问题而设计的。如果你对天气有疑问,欢迎随时询问!
[N][ Run][ 975]: hit eos,avg 9.04 token/s
[I][ GetKVCache][ 419]: precompute_len:98
prompt >> 你能干什么
[I][ SetKVCache][ 448]: prefill_grpid:2 kv_cache_num:512 precompute_len:98 input_num_token:10
[I][ Run][ 722]: input token num : 10
[I][ Run][ 823]: ttft: 548.07 ms
我能回答关于天气、生活、科技、文化、娱乐、历史等方面的很多问题。如果你有任何想知道的内容,都可以问我哦!
[N][ Run][ 975]: hit eos,avg 9.03 token/s
[I][ GetKVCache][ 419]: precompute_len:135
prompt >> q
[I][ run][ 80]: AXCLWorker exit with devid 0
>> q
(base) axera@raspberrypi:~ $ axcl-smi
+------------------------------------------------------------------------------------------------+
| AXCL-SMI V2.25.0_20250117163029 Driver V2.25.0_20250117163029 |
+-----------------------------------------+--------------+---------------------------------------+
| Card Name Firmware | Bus-Id | Memory-Usage |
| Fan Temp Pwr:Usage/Cap | CPU NPU | CMM-Usage |
|=========================================+==============+=======================================|
| 0 AX650N V2.25.0 | 0000:01:00.0 | 188 MiB / 945 MiB |
| -- 37C -- / -- | 1% 0% | 2335 MiB / 7040 MiB |
+-----------------------------------------+--------------+---------------------------------------+
+------------------------------------------------------------------------------------------------+
| Processes: |
| Card PID Process Name NPU Memory Usage |
|================================================================================================|
| 0 147835 /home/axera/samples/qwen2.5-1.5b-ctx/main_axcl_aarch64 1990172 KiB |
+------------------------------------------------------------------------------------------------+
(base) axera@raspberrypi:~ $
```
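The `load_config` block in the logs above enables top-k sampling (`top_k: 10`) with `temperature: 0.9` and top-p disabled. As an illustration only (this is not the runtime's actual decoder), a single top-k decode step can be sketched as:

```python
import math
import random

def sample_top_k(logits, k=10, temperature=0.9, rng=random):
    """Illustrative top-k sampling: keep the k highest logits,
    apply temperature, and sample from the renormalized softmax."""
    # indices of the k largest logits
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in idx]
    # numerically stable softmax over the kept logits
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    z = sum(weights)
    probs = [w / z for w in weights]
    # inverse-CDF sampling over the k candidates
    r = rng.random()
    acc = 0.0
    for i, p in zip(idx, probs):
        acc += p
        if r <= acc:
            return i
    return idx[-1]
```

With `k=1` this reduces to greedy decoding (always the argmax token).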
|
xw17/Phi-3-mini-4k-instruct_finetuned_4_def_lora | xw17 | 2025-04-02T05:03:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-31T01:21:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jichuanh/Reinforce-CartPole-v1 | jichuanh | 2025-04-02T05:01:33Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-01T22:30:57Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 494.90 +/- 15.30
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, see Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
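For context, the REINFORCE objective weights each action's log-probability by the discounted return from that timestep onward. A minimal sketch of that return computation (illustrative; not the course's exact implementation):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for each step of an episode."""
    returns = []
    g = 0.0
    # accumulate from the last reward backwards
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))
```

In CartPole-v1 every surviving step yields reward 1, so with `gamma=1.0` the return at step t is simply the number of remaining steps.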
|
lesso16/55ef5551-9954-479e-ad98-f737ba8db0af | lesso16 | 2025-04-02T05:00:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-04-02T02:44:04Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 55ef5551-9954-479e-ad98-f737ba8db0af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8c386572c219eacb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8c386572c219eacb_train_data.json
type:
field_instruction: prompt
field_output: gold_standard_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso16/55ef5551-9954-479e-ad98-f737ba8db0af
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000216
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/8c386572c219eacb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 160
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 93634e00-a69c-4426-ac4d-51020df490c5
wandb_project: 16a
wandb_run: your_name
wandb_runid: 93634e00-a69c-4426-ac4d-51020df490c5
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
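In the config above, the LoRA update is scaled by `lora_alpha / lora_r` = 128 / 64 = 2.0. An illustrative sketch of how such an adapter modifies a frozen weight (not PEFT's internal code; the shapes here are hypothetical):

```python
import numpy as np

def lora_forward(x, W, A, B, lora_alpha, r):
    """y = x W + (alpha / r) * x A B, with frozen W and trainable
    low-rank factors A (in_dim x r) and B (r x out_dim)."""
    scaling = lora_alpha / r
    return x @ W + scaling * (x @ A @ B)
```

Only A and B are updated during training, so the adapter checkpoint stays small relative to the 7B base model.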
# 55ef5551-9954-479e-ad98-f737ba8db0af
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000216
- train_batch_size: 4
- eval_batch_size: 4
- seed: 160
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
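The effective (total) train batch size follows directly from the micro-batch size and gradient accumulation settings listed above:

```python
micro_batch_size = 4               # train_batch_size above
gradient_accumulation_steps = 8    # gradients summed before each optimizer step
total_train_batch_size = micro_batch_size * gradient_accumulation_steps
print(total_train_batch_size)      # 32, matching the reported total_train_batch_size
```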
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 4.6787 |
| 1.7445 | 0.2825 | 500 | 1.7431 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jon-fernandes/whisper-small-25steps | jon-fernandes | 2025-04-02T05:00:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-31T12:24:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |