modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
habib-z/token_test | habib-z | 2024-06-30T22:47:53Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T22:47:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DewEfresh/neo_7b-V-merge | DewEfresh | 2024-06-30T22:58:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:DewEfresh/neo_7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T22:55:56Z | ---
base_model:
- DewEfresh/neo_7b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
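For reference, SLERP interpolates along the great-circle arc between two weight tensors instead of a straight line, which preserves the norm of the interpolated weights. Below is a minimal NumPy sketch of the per-tensor operation (illustrative only; the function name and fallback threshold are not mergekit internals):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    # Angle between the two (normalized) weight vectors
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.sin(omega) < eps:
        # Nearly (anti-)parallel vectors: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```

Note that with identical endpoints, `slerp(t, v, v)` returns `v` for any `t`, so a self-merge like the one configured below reproduces the base weights up to numerical precision.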
### Models Merged
The following models were included in the merge:
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: DewEfresh/neo_7b
- model: DewEfresh/neo_7b
merge_method: slerp
base_model: DewEfresh/neo_7b
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped interpolation curve: t=0 at the input/output layers, t=1 in the middle layers
```
|
OnFinanceAI/setup__llama_ft | OnFinanceAI | 2024-06-30T23:02:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T22:59:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
valerielucro/mistral_gsm8k_sample_sft_and_dpo_4 | valerielucro | 2024-06-30T23:01:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T23:01:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NickyNicky/bge-base-financial-matryoshka_test_3 | NickyNicky | 2024-06-30T23:04:45Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-06-30T23:04:13Z | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Teams across Delta have worked together to make an impact through
enhanced landing procedures, optimizations to flight routing and speed, and weight
reduction initiatives, saving over 20 million gallons of jet fuel in 2022 and
2023.
sentences:
- What was the percentage increase in Services net sales from 2022 to 2023?
- How much jet fuel did Delta Air Lines save between 2022 and 2023 through optimizations
in aircraft operations?
- How did Ford Pro's EBIT in 2023 compare to the previous year, and what contributed
to this change?
- source_sentence: On February 14, 2022, the State of Texas filed a lawsuit against
us in Texas state court (Texas v. Meta Platforms, Inc.) alleging that "tag suggestions"
and other uses of facial recognition technology violated the Texas Capture or
Use of Biometric Identifiers Act and the Texas Deceptive Trade Practices-Consumer
Protection Act, and seeking statutory damages and injunctive relief.
sentences:
- What did the auditor’s report dated February 9, 2024, state about the effectiveness
of Enphase Energy’s internal control over financial reporting as of December 31,
2023?
- What legal action did the State of Texas initiate against Meta Platforms, Inc.
on February 14, 2022?
- What caused the pretax loss in the Corporate & Other segment to increase in 2023
compared to 2022?
- source_sentence: Our two operating segments are "Compute & Networking" and "Graphics."
Refer to Note 17 of the Notes to the Consolidated Financial Statements in Part
IV, Item 15 of this Annual Report on Form 10-K for additional information.
sentences:
- What are the two operating segments of NVIDIA as mentioned in the text?
- How much did the gross margin increase in 2023 compared to 2022?
- What is the total assets and shareholders' equity of Chubb Limited as of December
31, 2023?
- source_sentence: The increase in marketing and sales expenses in fiscal year 2023
was mainly due to higher advertising and promotional spending related to Apex
Legends Mobile and the FIFA franchise.
sentences:
- What are included in Part IV, Item 15(a)(1) of the Annual Report on Form 10-K?
- What was the net income reported for the fiscal year ending in August 2023?
- What was the primary cause of the increase in marketing and sales expenses in
fiscal year 2023?
- source_sentence: 'Information on legal proceedings is included in Contact Email PRIOR
HISTORY: None PLACEHOLDER FOR ARBITRATION.'
sentences:
- Where can information about legal proceedings be found in the financial statements?
- What remaining authorization amount was available for share repurchases as of
January 28, 2023?
- What is the total amount authorized for the repurchase of common stock up to December
2023?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.71
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8428571428571429
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8771428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9142857142857143
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.71
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28095238095238095
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1754285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09142857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.71
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8428571428571429
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8771428571428571
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9142857142857143
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8151955748060781
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.783174603174603
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7866554834362436
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7028571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8457142857142858
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.88
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9157142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7028571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2819047619047619
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.176
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09157142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7028571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8457142857142858
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.88
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9157142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8131832672898918
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7799625850340134
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7833067978748278
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6985714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8457142857142858
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8785714285714286
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9071428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6985714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2819047619047619
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17571428571428568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0907142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6985714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8457142857142858
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8785714285714286
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9071428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8072080679843728
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7746224489795912
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7782328948106179
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6914285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8428571428571429
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8714285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9057142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6914285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28095238095238095
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17428571428571427
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09057142857142855
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6914285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8428571428571429
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8714285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9057142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.80532196181792
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7725623582766435
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7764353709024747
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6757142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8114285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.85
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8842857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6757142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2704761904761904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08842857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6757142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8114285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.85
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8842857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7835900962247281
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7508775510204081
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7557906355020412
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
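As the configuration above shows, pooling uses the CLS token (`pooling_mode_cls_token: True`) rather than mean pooling, matching how BGE models are trained, and the final `Normalize()` module scales every embedding to unit length, so dot product and cosine similarity coincide.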
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka_test_3")
# Run inference
sentences = [
'Information on legal proceedings is included in Contact Email PRIOR HISTORY: None PLACEHOLDER FOR ARBITRATION.',
'Where can information about legal proceedings be found in the financial statements?',
'What remaining authorization amount was available for share repurchases as of January 28, 2023?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
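Because the model was trained with MatryoshkaLoss, the leading dimensions of each embedding form usable lower-dimensional embeddings on their own. A minimal sketch continuing the snippet above (256 is one of the trained sizes; re-normalization is required after slicing):

```python
import numpy as np

# Keep only the first 256 Matryoshka dimensions and re-normalize,
# then compute cosine similarities on the truncated embeddings.
truncated = embeddings[:, :256]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
similarities_256 = truncated @ truncated.T
print(similarities_256.shape)
# (3, 3)
```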
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.71 |
| cosine_accuracy@3 | 0.8429 |
| cosine_accuracy@5 | 0.8771 |
| cosine_accuracy@10 | 0.9143 |
| cosine_precision@1 | 0.71 |
| cosine_precision@3 | 0.281 |
| cosine_precision@5 | 0.1754 |
| cosine_precision@10 | 0.0914 |
| cosine_recall@1 | 0.71 |
| cosine_recall@3 | 0.8429 |
| cosine_recall@5 | 0.8771 |
| cosine_recall@10 | 0.9143 |
| cosine_ndcg@10 | 0.8152 |
| cosine_mrr@10 | 0.7832 |
| **cosine_map@100** | **0.7867** |
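These metrics can be reproduced with the evaluator named above. A hedged sketch with toy stand-in data (the actual evaluation used the held-out financial question–passage pairs; since each query has exactly one relevant document, accuracy@k and recall@k coincide, and precision@k is accuracy@k divided by k):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy stand-ins for the held-out query/corpus pairs (illustrative only)
queries = {"q1": "What are the two operating segments of NVIDIA?"}
corpus = {"d1": 'Our two operating segments are "Compute & Networking" and "Graphics."'}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka_test_3")
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)  # dict mapping metric names to values
```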
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7029 |
| cosine_accuracy@3 | 0.8457 |
| cosine_accuracy@5 | 0.88 |
| cosine_accuracy@10 | 0.9157 |
| cosine_precision@1 | 0.7029 |
| cosine_precision@3 | 0.2819 |
| cosine_precision@5 | 0.176 |
| cosine_precision@10 | 0.0916 |
| cosine_recall@1 | 0.7029 |
| cosine_recall@3 | 0.8457 |
| cosine_recall@5 | 0.88 |
| cosine_recall@10 | 0.9157 |
| cosine_ndcg@10 | 0.8132 |
| cosine_mrr@10 | 0.78 |
| **cosine_map@100** | **0.7833** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6986 |
| cosine_accuracy@3 | 0.8457 |
| cosine_accuracy@5 | 0.8786 |
| cosine_accuracy@10 | 0.9071 |
| cosine_precision@1 | 0.6986 |
| cosine_precision@3 | 0.2819 |
| cosine_precision@5 | 0.1757 |
| cosine_precision@10 | 0.0907 |
| cosine_recall@1 | 0.6986 |
| cosine_recall@3 | 0.8457 |
| cosine_recall@5 | 0.8786 |
| cosine_recall@10 | 0.9071 |
| cosine_ndcg@10 | 0.8072 |
| cosine_mrr@10 | 0.7746 |
| **cosine_map@100** | **0.7782** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6914 |
| cosine_accuracy@3 | 0.8429 |
| cosine_accuracy@5 | 0.8714 |
| cosine_accuracy@10 | 0.9057 |
| cosine_precision@1 | 0.6914 |
| cosine_precision@3 | 0.281 |
| cosine_precision@5 | 0.1743 |
| cosine_precision@10 | 0.0906 |
| cosine_recall@1 | 0.6914 |
| cosine_recall@3 | 0.8429 |
| cosine_recall@5 | 0.8714 |
| cosine_recall@10 | 0.9057 |
| cosine_ndcg@10 | 0.8053 |
| cosine_mrr@10 | 0.7726 |
| **cosine_map@100** | **0.7764** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6757 |
| cosine_accuracy@3 | 0.8114 |
| cosine_accuracy@5 | 0.85 |
| cosine_accuracy@10 | 0.8843 |
| cosine_precision@1 | 0.6757 |
| cosine_precision@3 | 0.2705 |
| cosine_precision@5 | 0.17 |
| cosine_precision@10 | 0.0884 |
| cosine_recall@1 | 0.6757 |
| cosine_recall@3 | 0.8114 |
| cosine_recall@5 | 0.85 |
| cosine_recall@10 | 0.8843 |
| cosine_ndcg@10 | 0.7836 |
| cosine_mrr@10 | 0.7509 |
| **cosine_map@100** | **0.7558** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 47.19 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.59 tokens</li><li>max: 41 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| <code>For the year ended December 31, 2023, $305 million was recorded as a distribution against retained earnings for dividends.</code> | <code>How much in dividends was recorded against retained earnings in 2023?</code> |
| <code>In February 2023, we announced a 10% increase in our quarterly cash dividend to $2.09 per share.</code> | <code>By how much did the company increase its quarterly cash dividend in February 2023?</code> |
| <code>Depreciation and amortization totaled $4,856 as recorded in the financial statements.</code> | <code>How much did depreciation and amortization total to in the financial statements?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
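In other words, the training loss applies MultipleNegativesRankingLoss to each truncated embedding size and sums the terms with equal weight, roughly:

```latex
\mathcal{L} \;=\; \sum_{d \in \{768,\,512,\,256,\,128,\,64\}} w_d \,\mathcal{L}_{\mathrm{MNRL}}\bigl(E_{:, :d}\bigr), \qquad w_d = 1
```

where `E[:, :d]` denotes the batch embeddings truncated to their first *d* dimensions.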
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 40
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 20
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
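
With `per_device_train_batch_size` 40 and `gradient_accumulation_steps` 16, the effective batch size is 40 × 16 = 640 samples per optimizer step on each device.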
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 40
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.9114 | 9 | - | 0.7124 | 0.7361 | 0.7366 | 0.6672 | 0.7443 |
| 1.0127 | 10 | 2.0952 | - | - | - | - | - |
| 1.9241 | 19 | - | 0.7437 | 0.7561 | 0.7628 | 0.7172 | 0.7653 |
| 2.0253 | 20 | 1.1175 | - | - | - | - | - |
| 2.9367 | 29 | - | 0.7623 | 0.7733 | 0.7694 | 0.7288 | 0.7723 |
| 3.0380 | 30 | 0.6104 | - | - | - | - | - |
| 3.9494 | 39 | - | 0.7723 | 0.7746 | 0.7804 | 0.7405 | 0.7789 |
| 4.0506 | 40 | 0.4106 | - | - | - | - | - |
| 4.9620 | 49 | - | 0.7777 | 0.7759 | 0.7820 | 0.7475 | 0.7842 |
| 5.0633 | 50 | 0.314 | - | - | - | - | - |
| 5.9747 | 59 | - | 0.7802 | 0.7796 | 0.7856 | 0.7548 | 0.7839 |
| 6.0759 | 60 | 0.2423 | - | - | - | - | - |
| 6.9873 | 69 | - | 0.7756 | 0.7772 | 0.7834 | 0.7535 | 0.7818 |
| 7.0886 | 70 | 0.1962 | - | - | - | - | - |
| 8.0 | 79 | - | 0.7741 | 0.7774 | 0.7841 | 0.7551 | 0.7822 |
| 8.1013 | 80 | 0.1627 | - | - | - | - | - |
| 8.9114 | 88 | - | 0.7724 | 0.7752 | 0.7796 | 0.7528 | 0.7816 |
| 9.1139 | 90 | 0.1379 | - | - | - | - | - |
| 9.9241 | 98 | - | 0.7691 | 0.7782 | 0.7834 | 0.7559 | 0.7836 |
| 10.1266 | 100 | 0.1249 | - | - | - | - | - |
| 10.9367 | 108 | - | 0.7728 | 0.7802 | 0.7831 | 0.7536 | 0.7848 |
| 11.1392 | 110 | 0.1105 | - | - | - | - | - |
| 11.9494 | 118 | - | 0.7748 | 0.7785 | 0.7814 | 0.7558 | 0.7851 |
| 12.1519 | 120 | 0.1147 | - | - | - | - | - |
| 12.9620 | 128 | - | 0.7756 | 0.7788 | 0.7839 | 0.7550 | 0.7864 |
| 13.1646 | 130 | 0.098 | - | - | - | - | - |
| 13.9747 | 138 | - | 0.7767 | 0.7792 | 0.7828 | 0.7557 | 0.7873 |
| 14.1772 | 140 | 0.0927 | - | - | - | - | - |
| 14.9873 | 148 | - | 0.7758 | 0.7804 | 0.7847 | 0.7569 | 0.7892 |
| 15.1899 | 150 | 0.0921 | - | - | - | - | - |
| 16.0 | 158 | - | 0.7760 | 0.7794 | 0.7831 | 0.7551 | 0.7873 |
| 16.2025 | 160 | 0.0896 | - | - | - | - | - |
| 16.9114 | 167 | - | 0.7753 | 0.7799 | 0.7841 | 0.7570 | 0.7888 |
| 17.2152 | 170 | 0.0881 | - | - | - | - | - |
| 17.9241 | 177 | - | 0.7763 | 0.7787 | 0.7842 | 0.7561 | 0.7867 |
| 18.2278 | 180 | 0.0884 | 0.7764 | 0.7782 | 0.7833 | 0.7558 | 0.7867 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
DewEfresh/Neo_7b-merge12 | DewEfresh | 2024-06-30T23:16:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T23:04:27Z | ---
tags:
- merge
- mergekit
- lazymergekit
---
# Neo_7b-merge12
Neo_7b-merge12 is a SLERP merge of [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b) with itself (the model is listed twice in the configuration below), created using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
## 🧩 Configuration
```yaml
models:
- model: DewEfresh/neo_7b
- model: DewEfresh/neo_7b
merge_method: slerp
base_model: DewEfresh/neo_7b
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped interpolation curve: t=0 at the input/output layers, t=1 in the middle layers
```
## 💻 Usage
```python
# Install dependencies (notebook shell syntax)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "DewEfresh/Neo_7b-merge12"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template, then generate
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
sirnii/chika | sirnii | 2024-06-30T23:15:10Z | 0 | 0 | null | [
"music",
"code",
"not-for-all-audiences",
"en",
"pt",
"ja",
"doi:10.57967/hf/2663",
"license:mit",
"region:us"
] | null | 2024-06-30T23:04:51Z | ---
license: mit
language:
- en
- pt
- ja
tags:
- music
- code
- not-for-all-audiences
--- |
Rimou2002/lora_model | Rimou2002 | 2024-06-30T23:05:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T23:05:34Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Rimou2002
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
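A hedged loading sketch, assuming this repository holds LoRA adapter weights on top of the 4-bit base model (requires `bitsandbytes`; the adapter layout is not documented in this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized base model, then attach the LoRA adapters
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Rimou2002/lora_model")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```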
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
valerielucro/mistral_gsm8k_sample_sft_and_dpo_3 | valerielucro | 2024-06-30T23:06:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T23:06:30Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tklohj/WindyFloLLM | tklohj | 2024-06-30T23:18:12Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-06-30T23:18:12Z | ---
license: mit
---
|
Pra-tham/whisper-peft-full-labelled_v2 | Pra-tham | 2024-06-30T23:19:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T23:18:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/166043143051 | habulaj | 2024-06-30T23:19:26Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:19:23Z | Entry not found |
habulaj/1643130202 | habulaj | 2024-06-30T23:19:41Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:19:36Z | Entry not found |
NickyNicky/bge-base-financial-matryoshka_test_4 | NickyNicky | 2024-06-30T23:20:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-06-30T23:20:17Z | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: A number of factors may impact ESKD growth rates, including mortality
rates for dialysis patients or CKD patients, the aging of the U.S. population,
transplant rates, incidence rates for diseases that cause kidney failure such
as diabetes and hypertension, growth rates of minority populations with higher
than average incidence rates of ESKD.
sentences:
- By how much did the company increase its quarterly cash dividend in February 2023?
- What factors may impact the growth rates of the ESKD patient population?
- What percentage increase did salaries and related costs experience at Delta Air
Lines from 2022 to 2023?
- source_sentence: HIV product sales increased 6% to $18.2 billion in 2023, compared
to 2022.
sentences:
- What were the present values of lease liabilities for operating and finance leases
as of December 31, 2023?
- By what percentage did HIV product sales increase in 2023 compared to the previous
year?
- How is interest income not attributable to the Card Member loan portfolio primarily
represented in financial documents?
- source_sentence: If a violation is found, a broad range of remedies is potentially
available to the Commission and/or CMA, including imposing a fine and/or the prohibition
or restriction of certain business practices.
sentences:
- What are the potential remedies if a violation is found by the European Commission
or the U.K. Competition and Markets Authority in their investigation of automotive
companies?
- By which auditing standards were the consolidated financial statements of Salesforce,
Inc. audited?
- What is the main role of Kroger's Chief Executive Officer in the company?
- source_sentence: The discussion in Hewlett Packard Enterprise's Form 10-K highlights
factors impacting costs and revenues, including easing supply chain constraints,
foreign exchange pressures, inflationary trends, and recent tax developments potentially
affecting their financial outcomes.
sentences:
- Is the outcome of the investigation into Tesla's waste segregation practices currently
determinable?
- How does Hewlett Packard Enterprise justify the exclusion of transformation costs
from its non-GAAP financial measures?
- In the context of Hewlett Packard Enterprise's recent financial discussions, what
factors are expected to impact their operational costs and revenue growth moving
forward?
- source_sentence: Our Records Management and Data Management service revenue growth
is being negatively impacted by declining activity rates as stored records and
tapes are becoming less active and more archival.
sentences:
- How is Iron Mountain addressing the decline in activity rates in their Records
and Data Management services?
- What services do companies that build fiber-based networks provide in the Connectivity
& Platforms markets?
- What business outcomes is HPE focused on accelerating with its technological solutions?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7057142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8457142857142858
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8785714285714286
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9114285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7057142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2819047619047619
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17571428571428568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09114285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7057142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8457142857142858
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8785714285714286
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9114285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8125296344519609
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7804263038548749
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7839408125709297
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7071428571428572
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8428571428571429
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8742857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9114285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7071428571428572
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28095238095238095
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17485714285714282
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09114285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7071428571428572
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8428571428571429
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8742857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9114285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8126517351231356
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7807267573696143
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7841188299664252
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7028571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8357142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8685714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9071428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7028571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2785714285714286
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1737142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09071428571428572
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7028571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8357142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8685714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9071428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8086618947757659
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7768820861678005
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7806177775944575
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6914285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.82
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8557142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9014285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6914285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2733333333333334
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17114285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09014285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6914285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.82
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8557142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9014285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7980982703041672
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7650045351473919
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7688564414027702
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6542857142857142
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7885714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8328571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8828571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6542857142857142
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26285714285714284
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16657142857142856
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08828571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6542857142857142
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7885714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8328571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8828571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7689665884678363
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7325351473922898
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7369423610264151
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka_test_4")
# Run inference
sentences = [
'Our Records Management and Data Management service revenue growth is being negatively impacted by declining activity rates as stored records and tapes are becoming less active and more archival.',
'How is Iron Mountain addressing the decline in activity rates in their Records and Data Management services?',
'What services do companies that build fiber-based networks provide in the Connectivity & Platforms markets?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
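Because the model was trained with a Matryoshka objective at dimensions 768/512/256/128/64, the embeddings can also be truncated to a shorter prefix for cheaper storage and retrieval. A minimal sketch, assuming a Sentence Transformers version that supports `truncate_dim` (added in v2.7.0):

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding; the Matryoshka
# training objective is what makes these truncated prefixes useful on their own.
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka_test_4", truncate_dim=256)

embeddings = model.encode(["By what percentage did HIV product sales increase in 2023?"])
print(embeddings.shape)
# (1, 256)
```

The `dim_256` metrics reported in this card (cosine_map@100 of 0.7806 vs. 0.7839 at full width) give a sense of the quality trade-off at that truncation.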
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7057 |
| cosine_accuracy@3 | 0.8457 |
| cosine_accuracy@5 | 0.8786 |
| cosine_accuracy@10 | 0.9114 |
| cosine_precision@1 | 0.7057 |
| cosine_precision@3 | 0.2819 |
| cosine_precision@5 | 0.1757 |
| cosine_precision@10 | 0.0911 |
| cosine_recall@1 | 0.7057 |
| cosine_recall@3 | 0.8457 |
| cosine_recall@5 | 0.8786 |
| cosine_recall@10 | 0.9114 |
| cosine_ndcg@10 | 0.8125 |
| cosine_mrr@10 | 0.7804 |
| **cosine_map@100** | **0.7839** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7071 |
| cosine_accuracy@3 | 0.8429 |
| cosine_accuracy@5 | 0.8743 |
| cosine_accuracy@10 | 0.9114 |
| cosine_precision@1 | 0.7071 |
| cosine_precision@3 | 0.281 |
| cosine_precision@5 | 0.1749 |
| cosine_precision@10 | 0.0911 |
| cosine_recall@1 | 0.7071 |
| cosine_recall@3 | 0.8429 |
| cosine_recall@5 | 0.8743 |
| cosine_recall@10 | 0.9114 |
| cosine_ndcg@10 | 0.8127 |
| cosine_mrr@10 | 0.7807 |
| **cosine_map@100** | **0.7841** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7029 |
| cosine_accuracy@3 | 0.8357 |
| cosine_accuracy@5 | 0.8686 |
| cosine_accuracy@10 | 0.9071 |
| cosine_precision@1 | 0.7029 |
| cosine_precision@3 | 0.2786 |
| cosine_precision@5 | 0.1737 |
| cosine_precision@10 | 0.0907 |
| cosine_recall@1 | 0.7029 |
| cosine_recall@3 | 0.8357 |
| cosine_recall@5 | 0.8686 |
| cosine_recall@10 | 0.9071 |
| cosine_ndcg@10 | 0.8087 |
| cosine_mrr@10 | 0.7769 |
| **cosine_map@100** | **0.7806** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6914 |
| cosine_accuracy@3 | 0.82 |
| cosine_accuracy@5 | 0.8557 |
| cosine_accuracy@10 | 0.9014 |
| cosine_precision@1 | 0.6914 |
| cosine_precision@3 | 0.2733 |
| cosine_precision@5 | 0.1711 |
| cosine_precision@10 | 0.0901 |
| cosine_recall@1 | 0.6914 |
| cosine_recall@3 | 0.82 |
| cosine_recall@5 | 0.8557 |
| cosine_recall@10 | 0.9014 |
| cosine_ndcg@10 | 0.7981 |
| cosine_mrr@10 | 0.765 |
| **cosine_map@100** | **0.7689** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6543 |
| cosine_accuracy@3 | 0.7886 |
| cosine_accuracy@5 | 0.8329 |
| cosine_accuracy@10 | 0.8829 |
| cosine_precision@1 | 0.6543 |
| cosine_precision@3 | 0.2629 |
| cosine_precision@5 | 0.1666 |
| cosine_precision@10 | 0.0883 |
| cosine_recall@1 | 0.6543 |
| cosine_recall@3 | 0.7886 |
| cosine_recall@5 | 0.8329 |
| cosine_recall@10 | 0.8829 |
| cosine_ndcg@10 | 0.769 |
| cosine_mrr@10 | 0.7325 |
| **cosine_map@100** | **0.7369** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 46.55 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.56 tokens</li><li>max: 42 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|
| <code>Internationally, Visa Inc.'s commercial payments volume grew by 23% from $407 billion in 2021 to $500 billion in 2022.</code> | <code>What was the growth rate of Visa Inc.'s commercial payments volume internationally between 2021 and 2022?</code> |
| <code>The consolidated financial statements and accompanying notes listed in Part IV, Item 15(a)(1) of this Annual Report on Form 10-K are included immediately following Part IV hereof.</code> | <code>Where can one find the consolidated financial statements and accompanying notes in the Annual Report on Form 10-K?</code> |
| <code>The additional paid-in capital at the end of 2023 was recorded as $114,519 million.</code> | <code>What was the amount recorded for additional paid-in capital at the end of 2023?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
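As a rough illustration (not the original training script), that configuration corresponds to the following loss construction in Sentence Transformers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Inner loss: in-batch negatives over (anchor, positive) pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Outer wrapper: apply the same objective to each truncated embedding prefix.
train_loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```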
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 80
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 15
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 80
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.8101 | 4 | - | 0.7066 | 0.7309 | 0.7390 | 0.6462 | 0.7441 |
| 1.8228 | 9 | - | 0.7394 | 0.7497 | 0.7630 | 0.6922 | 0.7650 |
| 2.0253 | 10 | 2.768 | - | - | - | - | - |
| 2.8354 | 14 | - | 0.7502 | 0.7625 | 0.7767 | 0.7208 | 0.7787 |
| 3.8481 | 19 | - | 0.7553 | 0.7714 | 0.7804 | 0.7234 | 0.7802 |
| 4.0506 | 20 | 1.1294 | - | - | - | - | - |
| 4.8608 | 24 | - | 0.7577 | 0.7769 | 0.7831 | 0.7327 | 0.7858 |
| 5.8734 | 29 | - | 0.7616 | 0.7775 | 0.7832 | 0.7335 | 0.7876 |
| 6.0759 | 30 | 0.7536 | - | - | - | - | - |
| 6.8861 | 34 | - | 0.7624 | 0.7788 | 0.7832 | 0.7352 | 0.7882 |
| 7.8987 | 39 | - | 0.7665 | 0.7795 | 0.7814 | 0.7359 | 0.7861 |
| 8.1013 | 40 | 0.5846 | - | - | - | - | - |
| 8.9114 | 44 | - | 0.7688 | 0.7801 | 0.7828 | 0.7360 | 0.7857 |
| 9.9241 | 49 | - | 0.7698 | 0.7804 | 0.7836 | 0.7367 | 0.7840 |
| 10.1266 | 50 | 0.5187 | - | - | - | - | - |
| 10.9367 | 54 | - | 0.7692 | 0.7801 | 0.7827 | 0.7383 | 0.7837 |
| 11.9494 | 59 | - | 0.7698 | 0.7801 | 0.7834 | 0.7377 | 0.7849 |
| 12.1519 | 60 | 0.4949 | 0.7689 | 0.7806 | 0.7841 | 0.7369 | 0.7839 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
habulaj/141048117168 | habulaj | 2024-06-30T23:27:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:27:27Z | Entry not found |
JesseGuerrero/example-model | JesseGuerrero | 2024-06-30T23:30:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:29:19Z | # Example Model
This is my model card
---
license: mit
---
|
noahtye/testcreatemodel | noahtye | 2024-07-01T18:32:12Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-06-30T23:29:21Z | Entry not found |
megagarra/Olivia | megagarra | 2024-06-30T23:34:41Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:30:17Z | Entry not found |
habulaj/894910220 | habulaj | 2024-06-30T23:30:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:30:45Z | Entry not found |
asdasddsa243634/WEDWQ | asdasddsa243634 | 2024-06-30T23:30:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:30:50Z | Entry not found |
Sjsjjddj887/Davekerr | Sjsjjddj887 | 2024-06-30T23:41:25Z | 0 | 0 | null | [
"license:llama2",
"region:us"
] | null | 2024-06-30T23:41:25Z | ---
license: llama2
---
|
apwic/summarization-base-1 | apwic | 2024-07-01T03:04:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-06-30T23:41:55Z | ---
language:
- id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-base-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-base-1
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5103
- Rouge1: 0.4427
- Rouge2: 0.0
- Rougel: 0.4423
- Rougelsum: 0.4403
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
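A sketch of how these values could map onto Hugging Face `Seq2SeqTrainingArguments` (a reconstruction for illustration only; the actual training script is not published in this card):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="summarization-base-1",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    predict_with_generate=True,  # assumption: required to compute ROUGE and Gen Len
)
```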
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.6316 | 1.0 | 3566 | 0.4807 | 0.4602 | 0.0 | 0.4604 | 0.4565 | 1.0 |
| 0.4336 | 2.0 | 7132 | 0.4717 | 0.4661 | 0.0 | 0.466 | 0.4622 | 1.0 |
| 0.3363 | 3.0 | 10698 | 0.4723 | 0.4799 | 0.0 | 0.479 | 0.4762 | 1.0 |
| 0.2656 | 4.0 | 14264 | 0.4825 | 0.4713 | 0.0 | 0.4703 | 0.4666 | 1.0 |
| 0.219 | 5.0 | 17830 | 0.5103 | 0.4427 | 0.0 | 0.4423 | 0.4403 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tenyaiida/MysteriousKorn | tenyaiida | 2024-07-01T01:26:51Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-06-30T23:42:31Z | ---
license: mit
---
|
TomEijkelenkamp/renaissance-deepseek-composition | TomEijkelenkamp | 2024-06-30T23:45:02Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:45:02Z | Entry not found |
Rimou2002/lora_modell | Rimou2002 | 2024-06-30T23:47:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T23:46:54Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Rimou2002
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Navya1602/extraversion_model_llama2 | Navya1602 | 2024-06-30T23:48:13Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-06-30T23:48:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
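For illustration, this list corresponds to a standard 4-bit NF4 (QLoRA-style) quantization config in `transformers` — a sketch, not the original training code:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Pass as `quantization_config=bnb_config` when loading the base model.
```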
### Framework versions
- PEFT 0.4.0
|
habulaj/140993117118 | habulaj | 2024-06-30T23:49:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:49:01Z | Entry not found |
habulaj/8229476372 | habulaj | 2024-06-30T23:50:25Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:50:19Z | Entry not found |
habulaj/8290360379 | habulaj | 2024-06-30T23:50:40Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:50:37Z | Entry not found |
movenb3at/NMH | movenb3at | 2024-06-30T23:52:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T23:51:51Z | Entry not found |
nachors/llama-finetuned | nachors | 2024-06-30T23:57:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:acunamartin1426/hola",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T23:57:06Z | ---
base_model: acunamartin1426/hola
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** nachors
- **License:** apache-2.0
- **Finetuned from model:** acunamartin1426/hola
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jmmhhaemaglobal/weather | jmmhhaemaglobal | 2024-06-30T23:57:35Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-06-30T23:57:35Z | ---
license: mit
---
|
sert121/llama3-lora-aligned-orpo-4epochs | sert121 | 2024-06-30T23:58:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"orpo",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-06-30T23:58:39Z | ---
base_model: defog/llama-3-sqlcoder-8b
library_name: peft
license: cc-by-sa-4.0
tags:
- trl
- orpo
- generated_from_trainer
model-index:
- name: llama3-lora-aligned-orpo-4epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sert121/huggingface/runs/arfiqsza)
# llama3-lora-aligned-orpo-4epochs
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- training_steps: 1000
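A hedged sketch of an equivalent TRL setup (the model, tokenizer, and dataset variables are placeholders; the actual script is not included in this card):

```python
from trl import ORPOConfig, ORPOTrainer

# Hypothetical reconstruction of the run from the hyperparameters above.
config = ORPOConfig(
    output_dir="llama3-lora-aligned-orpo-4epochs",
    learning_rate=8e-5,
    per_device_train_batch_size=4,
    lr_scheduler_type="linear",
    warmup_steps=150,
    max_steps=1000,
    seed=42,
)

trainer = ORPOTrainer(
    model=model,            # placeholder: defog/llama-3-sqlcoder-8b loaded elsewhere
    args=config,
    train_dataset=dataset,  # placeholder: preference pairs (prompt/chosen/rejected)
    tokenizer=tokenizer,    # placeholder
)
trainer.train()
```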
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.1
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1 |
kmpartner/xs09-lcmdistill-test | kmpartner | 2024-07-03T00:20:28Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-06-30T23:59:44Z | Entry not found |
Hazza1/Walter | Hazza1 | 2024-07-01T00:00:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:00:19Z | Entry not found |
habulaj/4987239320 | habulaj | 2024-07-01T00:02:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:02:48Z | Entry not found |
nyllarussell/Gerald | nyllarussell | 2024-07-01T00:05:28Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:05:28Z | Entry not found |
habulaj/184280158654 | habulaj | 2024-07-01T00:05:58Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:05:55Z | Entry not found |
habulaj/248605219766 | habulaj | 2024-07-01T00:13:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:13:17Z | Entry not found |
Eminnky/Kismet | Eminnky | 2024-07-01T00:13:37Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T00:13:37Z | ---
license: apache-2.0
---
|
LarryAIDraw/xlMerges_xl011 | LarryAIDraw | 2024-07-01T00:30:28Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-07-01T00:15:41Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/511045?modelVersionId=606777 |
mjkenney/my-gemma-2-arc-finetuned-model | mjkenney | 2024-07-01T00:19:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-01T00:15:45Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/8532363792 | habulaj | 2024-07-01T00:22:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:22:38Z | Entry not found |
Ramikan-BR/TiamaPY-LORA-v41 | Ramikan-BR | 2024-07-01T00:25:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T00:24:22Z | ---
base_model: unsloth/tinyllama-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
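Since this repository stores LoRA adapter weights (per the model name), here is a minimal loading sketch with PEFT — an illustration under the assumption that the adapter files live in this repo, not official usage instructions:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The base checkpoint is pre-quantized to 4 bits, so this needs a CUDA GPU
# with bitsandbytes installed; device_map="auto" requires accelerate.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/tinyllama-chat-bnb-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama-chat-bnb-4bit")

# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "Ramikan-BR/TiamaPY-LORA-v41")
```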
|
XinHun/Frieren | XinHun | 2024-07-01T00:27:06Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-07-01T00:25:16Z | ---
license: other
license_name: '01'
license_link: LICENSE
---
|
00BER/llama-3-8b-pretrained | 00BER | 2024-07-01T00:27:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T00:26:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
congminh2456/dexyan_cap | congminh2456 | 2024-07-03T00:54:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:30:05Z | Entry not found |
LarryAIDraw/HeroineXX | LarryAIDraw | 2024-07-01T00:33:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-07-01T00:30:56Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/545726/mysterious-heroine-x-xx-3-outfits-fate-grand-order |
LarryAIDraw/Mahiru_Shiina-37 | LarryAIDraw | 2024-07-01T00:33:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-07-01T00:31:26Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/427739?modelVersionId=606988 |
jdnicoll1/wolof-to-english-translation-v1 | jdnicoll1 | 2024-07-02T02:41:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-01T00:38:23Z | Entry not found |
habulaj/235488206773 | habulaj | 2024-07-01T00:39:11Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:39:01Z | Entry not found |
YongjieNiu/prior-SELU-adl-cat-1-500 | YongjieNiu | 2024-07-01T00:42:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:42:31Z | Entry not found |
Hazza1/EdithFinch | Hazza1 | 2024-07-01T00:45:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:45:26Z | Entry not found |
dwb2023/paligemma-cnmc-ft | dwb2023 | 2024-07-02T11:18:53Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2024-07-01T00:46:03Z | ---
base_model: google/paligemma-3b-pt-224
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: paligemma-cnmc-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma-cnmc-ft
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2339
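Because this repository stores a PEFT adapter (see the tags above), a minimal loading sketch — assuming the adapter weights live in `dwb2023/paligemma-cnmc-ft` — could look like:

```python
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-pt-224")
processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")

# Attach the fine-tuned adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "dwb2023/paligemma-cnmc-ft")
```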
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 170
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.9645 | 17 | 1.3682 |
| No log | 1.9858 | 35 | 1.1198 |
| 1.1664 | 2.9504 | 52 | 0.6518 |
| 1.1664 | 3.9716 | 70 | 0.3661 |
| 1.1664 | 4.9929 | 88 | 0.3079 |
| 0.3897 | 5.9574 | 105 | 0.2835 |
| 0.3897 | 6.9787 | 123 | 0.2548 |
| 0.3897 | 8.0 | 141 | 0.2513 |
| 0.2665 | 8.9645 | 158 | 0.2098 |
| 0.2665 | 9.9858 | 176 | 0.2031 |
| 0.2665 | 10.9504 | 193 | 0.2482 |
| 0.1931 | 11.9716 | 211 | 0.2339 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.43.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
khanhnn55/naschainv9 | khanhnn55 | 2024-07-01T04:45:12Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:53:27Z | Entry not found |
wjf563745940/blm | wjf563745940 | 2024-07-01T00:57:04Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:57:04Z | Entry not found |
Reikoow/ReikoModel | Reikoow | 2024-07-01T00:57:19Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"gguf",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T00:57:18Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Reikoow
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
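Because the export carries the `gguf` tag, one way to run it locally is with `llama-cpp-python` — a sketch in which the filename pattern is a placeholder for whichever quantized `.gguf` file is actually in the repo:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Reikoow/ReikoModel",
    filename="*.gguf",  # placeholder glob; substitute the concrete quant file
)

out = llm("Hello, how are you?", max_tokens=32)
print(out["choices"][0]["text"])
```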
|
AperiJessyca/uvci_Koumankan_mt_dyu_fr | AperiJessyca | 2024-07-01T00:57:43Z | 0 | 1 | null | [
"region:us"
] | null | 2024-07-01T00:57:43Z | Entry not found |
Samdulx21/Object-Detection | Samdulx21 | 2024-07-01T00:59:53Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T00:59:53Z | Entry not found |
1231czx/7b_dpo_iter2_4e7_from_sft1epoch_step150 | 1231czx | 2024-07-01T01:03:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T01:00:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mmbutera/karina | mmbutera | 2024-07-01T01:04:00Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T01:02:39Z | ---
license: openrail
---
|
Quant-Cartel/Llama-3-TenyxChat-DaybreakStorywriter-70B-exl2-rpcal | Quant-Cartel | 2024-07-02T02:20:53Z | 0 | 0 | null | [
"not-for-all-audiences",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-07-01T01:05:18Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
```
e88 88e d8
d888 888b 8888 8888 ,"Y88b 888 8e d88
C8888 8888D 8888 8888 "8" 888 888 88b d88888
Y888 888P Y888 888P ,ee 888 888 888 888
"88 88" "88 88" "88 888 888 888 888
b
8b,
e88'Y88 d8 888
d888 'Y ,"Y88b 888,8, d88 ,e e, 888
C8888 "8" 888 888 " d88888 d88 88b 888
Y888 ,d ,ee 888 888 888 888 , 888
"88,d88 "88 888 888 888 "YeeP" 888
PROUDLY PRESENTS
```
# Llama-3-TenyxChat-DaybreakStorywriter-70B-exl2-rpcal
Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
Branches:
- `main` -- `measurement.json`
- `6b8h` -- 6bpw, 8bit lm_head
- `4.65b6h` -- 4.65bpw, 6bit lm_head
- `2.25b6h` -- 2.25bpw, 6bit lm_head
Original model link: [Envoid/Llama-3-TenyxChat-DaybreakStorywriter-70B](https://huggingface.co/Envoid/Llama-3-TenyxChat-DaybreakStorywriter-70B)
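A minimal sketch of pulling just one of the branches listed above; on the Hub a branch name doubles as a git revision:
```python
# Hedged sketch: download only the 6bpw / 8-bit-lm_head quant branch.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Quant-Cartel/Llama-3-TenyxChat-DaybreakStorywriter-70B-exl2-rpcal",
    revision="6b8h",  # pick any branch from the list above
)
print(local_dir)
```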
### Quanter's notes
As the default dataset is reportedly better in nearly all situations, I've started quantizing with it in addition to my standard rpcal fare. I'd appreciate real-world tests to confirm that hypothesis, so please leave a comment if you find rpcal better than what I've dubbed 'longcal'.
Original model README below.
-----
## Caution: This model is capable of producing adult content.
This model is a 50/50 SLERP merge between [crestf411/L3-70B-daybreak-storywriter-v0.4](https://huggingface.co/crestf411/L3-70B-daybreak-storywriter-v0.4)
and
[tenyx/Llama3-TenyxChat-70B](https://huggingface.co/tenyx/Llama3-TenyxChat-70B)
The resulting model scores significantly higher on the super top secret, private **NALA** evaluation *(Neural-linguistic Assessment of Lifelike Approximation)*<sup>[1]</sup>, making it a great choice for novelty RP scenarios.
- **TenyxChat-DaybreakStorywriter: 76.52**
- DeepSeek-Coder-V2-Instruct: 68.20
- TenyxChat: 57.89
This model utilizes the Llama-3-Instruct prompt format.
<sup>1. The NALA evaluation is not a proper scientific evaluation and should not be used to inform any decisions related to personal safety, personal enjoyment, or any other critical or non-critical matter. NALA score is entirely arbitrary and subject to change without notice.</sup>
|
xjw1001001/HC_OVarian | xjw1001001 | 2024-07-01T01:10:59Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:10:07Z | Entry not found |
khalidr/xlm-roberta-base-finetuned-panx-de | khalidr | 2024-07-02T22:40:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-01T01:17:24Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627339761769711
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1383
- F1: 0.8627
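A minimal inference sketch, assuming the checkpoint is public and the default pipeline settings suffice:
```python
# Hedged sketch: German NER with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="khalidr/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```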
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2614 | 1.0 | 525 | 0.1544 | 0.8270 |
| 0.1292 | 2.0 | 1050 | 0.1322 | 0.8537 |
| 0.0804 | 3.0 | 1575 | 0.1383 | 0.8627 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
|
Dorjkhnd/Large_57k_whisper | Dorjkhnd | 2024-07-01T01:19:53Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:19:53Z | Entry not found |
AIEKEK/distilbert-base-uncased-distilled-clinc | AIEKEK | 2024-07-01T02:25:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-01T01:21:07Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0352
- Accuracy: 0.9345
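A minimal inference sketch, assuming the checkpoint is public; the model was distilled on CLINC-style intent data, so short user utterances are the expected input:
```python
# Hedged sketch: intent classification with the distilled student model.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AIEKEK/distilbert-base-uncased-distilled-clinc",
)
print(clf("transfer $100 from checking to savings"))
```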
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8211 | 1.0 | 318 | 0.4201 | 0.6955 |
| 0.314 | 2.0 | 636 | 0.1452 | 0.8432 |
| 0.1453 | 3.0 | 954 | 0.0744 | 0.8961 |
| 0.0935 | 4.0 | 1272 | 0.0544 | 0.9203 |
| 0.0732 | 5.0 | 1590 | 0.0451 | 0.9255 |
| 0.0633 | 6.0 | 1908 | 0.0405 | 0.9306 |
| 0.0574 | 7.0 | 2226 | 0.0378 | 0.9332 |
| 0.0535 | 8.0 | 2544 | 0.0363 | 0.9345 |
| 0.0517 | 9.0 | 2862 | 0.0355 | 0.9345 |
| 0.0503 | 10.0 | 3180 | 0.0352 | 0.9345 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
Sofronie/test_01 | Sofronie | 2024-07-01T01:25:48Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T01:25:48Z | ---
license: mit
---
|
Albertor0710/Santo-208 | Albertor0710 | 2024-07-01T01:27:32Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T01:27:32Z | ---
license: apache-2.0
---
|
manishaaaaa/llama3model | manishaaaaa | 2024-07-01T01:32:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-01T01:29:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/7545455020 | habulaj | 2024-07-01T01:30:57Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:30:52Z | Entry not found |
Zhussip/Lookover | Zhussip | 2024-07-01T01:47:36Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"finance",
"text-classification",
"en",
"ru",
"dataset:OpenGVLab/ShareGPT-4o",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-07-01T01:34:21Z | ---
license: apache-2.0
datasets:
- OpenGVLab/ShareGPT-4o
language:
- en
- ru
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- finance
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
X0x0G/Ashesi_Capstone | X0x0G | 2024-07-01T01:49:03Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:35:56Z | Entry not found |
habulaj/182782157325 | habulaj | 2024-07-01T01:36:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:36:47Z | Entry not found |
andyIbr/example-model | andyIbr | 2024-07-01T01:57:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:37:12Z | ---
license: mit
---
# example model
This is my model card README
|
Pradyumn/learned_cat | Pradyumn | 2024-07-01T01:37:14Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:37:14Z | Entry not found |
vaishnavi514/results_metrics_distilbert | vaishnavi514 | 2024-07-01T03:38:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-01T01:39:24Z | Entry not found |
sert121/llama3-lora-aligned-orpo-beta-0.2 | sert121 | 2024-07-01T01:41:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"orpo",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-07-01T01:41:27Z | ---
base_model: defog/llama-3-sqlcoder-8b
library_name: peft
license: cc-by-sa-4.0
tags:
- trl
- orpo
- generated_from_trainer
model-index:
- name: llama3-lora-aligned-orpo-beta-0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sert121/huggingface/runs/3jm3ojo8)
# llama3-lora-aligned-orpo-beta-0.2
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on an unknown dataset.
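A minimal sketch of loading the adapter on top of its base model with PEFT, assuming the adapter weights in this repo are complete:
```python
# Hedged sketch: attach the ORPO-trained LoRA adapter to the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "defog/llama-3-sqlcoder-8b", device_map="auto"  # ~8B params; needs a GPU
)
model = PeftModel.from_pretrained(base, "sert121/llama3-lora-aligned-orpo-beta-0.2")
tokenizer = AutoTokenizer.from_pretrained("defog/llama-3-sqlcoder-8b")
```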
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.1
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1 |
LuuNgoc2k2/ViNER-MDeberta-v2 | LuuNgoc2k2 | 2024-07-01T01:52:20Z | 0 | 1 | null | [
"pytorch",
"region:us"
] | null | 2024-07-01T01:50:47Z | Entry not found |
Blank0123/01 | Blank0123 | 2024-07-01T01:53:04Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:53:04Z | Entry not found |
SotaChambers/training_test | SotaChambers | 2024-07-01T02:45:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T01:54:29Z | Entry not found |
Aquasquirel/teste | Aquasquirel | 2024-07-01T01:57:07Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T01:57:07Z | Entry not found |
peitongd/model | peitongd | 2024-07-01T02:02:30Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T02:02:28Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** peitongd
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
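A minimal reload sketch with Unsloth (not the author's exact training recipe), assuming the repo holds LoRA or merged weights that `FastLanguageModel` can load:
```python
# Hedged sketch: reload the model in 4-bit for fast inference with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="peitongd/model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast decoding path
```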
|
Yuki-Kokomi/OpenECAD-SigLIP-2.4B | Yuki-Kokomi | 2024-07-01T02:12:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"tinyllava",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-01T02:06:15Z | Entry not found |
sgdkn/pose-classification-hp | sgdkn | 2024-07-01T03:12:08Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-01T02:07:04Z | Entry not found |
habulaj/135575171256 | habulaj | 2024-07-01T02:07:12Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:07:07Z | Entry not found |
Yeongtak/tdpo_baseline | Yeongtak | 2024-07-01T02:22:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:10:38Z | Entry not found |
habulaj/2176521560 | habulaj | 2024-07-01T02:10:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:10:43Z | Entry not found |
elliotthwangmsa/gemma-chinese | elliotthwangmsa | 2024-07-01T02:10:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:10:44Z | Entry not found |
Yeongtak/tdpo_geodesic | Yeongtak | 2024-07-01T02:21:15Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:11:14Z | Entry not found |
houbw/llama3_8b_bnb_4bit_ruozhiba_method_9 | houbw | 2024-07-01T02:12:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T02:12:02Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dave1024/Qwen-Qwen1.5-0.5B-1719800134 | dave1024 | 2024-07-01T02:15:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-07-01T02:15:34Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
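Pending the official snippet, a hedged sketch that loads the adapter directly, assuming the adapter config in this repo is intact:
```python
# Hedged sketch: AutoPeftModelForCausalLM resolves and loads the Qwen base for you.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("dave1024/Qwen-Qwen1.5-0.5B-1719800134")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
```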
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
dave1024/Qwen-Qwen1.5-1.8B-1719800215 | dave1024 | 2024-07-01T02:17:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-07-01T02:16:55Z | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
zeus69/Zeus | zeus69 | 2024-07-01T02:17:26Z | 0 | 0 | null | [
"license:wtfpl",
"region:us"
] | null | 2024-07-01T02:17:26Z | ---
license: wtfpl
---
|
dave1024/Qwen-Qwen1.5-7B-1719800344 | dave1024 | 2024-07-01T02:19:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"region:us"
] | null | 2024-07-01T02:19:04Z | ---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
dave1024/google-gemma-2b-1719800420 | dave1024 | 2024-07-01T02:20:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | 2024-07-01T02:20:20Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
juanquivilla/deberta-base-en-wiki | juanquivilla | 2024-07-01T02:55:11Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T02:20:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anupam69/fined-tuned-mistral | anupam69 | 2024-07-01T02:24:42Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T02:22:01Z | ---
license: apache-2.0
---
|
habulaj/10056192054 | habulaj | 2024-07-01T02:25:41Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:25:37Z | Entry not found |
dave1024/Qwen-Qwen1.5-0.5B-1719800767 | dave1024 | 2024-07-01T02:26:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-07-01T02:26:07Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
kentyang/musicgen-melody-lora-punk-colab | kentyang | 2024-07-01T02:26:19Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:26:19Z | Entry not found |