modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
odedregev/Llama-2-7b-chat-hf-science-rejection-sampling | odedregev | 2024-07-01T14:51:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T14:43:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
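Until the authors fill this in, here is a minimal sketch of standard 🤗 Transformers usage for a Llama-style chat model, inferred from the repository tags (`llama`, `text-generation`, `conversational`); the prompt and generation settings are illustrative assumptions, not documented behavior:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "odedregev/Llama-2-7b-chat-hf-science-rejection-sampling"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative prompt; the card does not document an expected prompt format
inputs = tokenizer("Explain rejection sampling in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```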
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ilyass31/results | ilyass31 | 2024-07-01T16:41:15Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-07-01T14:44:27Z | ---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
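For readers reproducing this run, the hyperparameters above map onto 🤗 `TrainingArguments` roughly as follows; this is a sketch, not the authors' actual training script (AdamW with the listed betas and epsilon is the library default):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```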
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3
|
josedonoso/blip2-ecg-khan | josedonoso | 2024-07-01T14:48:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T14:48:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
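No usage code is provided; the following is a sketch of generic BLIP-2 inference with 🤗 Transformers, inferred only from the repository name. The prompt, the input image, and the assumption that the repo holds a complete BLIP-2 checkpoint are all unverified:
```python
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

repo = "josedonoso/blip2-ecg-khan"
processor = Blip2Processor.from_pretrained(repo)
model = Blip2ForConditionalGeneration.from_pretrained(repo)

image = Image.open("ecg.png")  # hypothetical input image
inputs = processor(images=image, text="Describe this ECG.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```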
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bitmind/deepfake-detector-base | bitmind | 2024-07-01T14:51:54Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T14:50:05Z | ---
license: mit
---
|
rashid996958/pix2pix_exp42 | rashid996958 | 2024-07-01T14:50:53Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:50:48Z | Entry not found |
HieuBeo/Ho_Chi_Minh-LoRa | HieuBeo | 2024-07-01T14:50:57Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:50:57Z | Entry not found |
Pyszczysko/swendamocnaboli | Pyszczysko | 2024-07-01T14:51:27Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2024-07-01T14:51:26Z | ---
license: unknown
---
|
cheng-cherry/my_awesome_opus_books_model | cheng-cherry | 2024-07-01T15:28:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-01T14:51:31Z | Entry not found |
yuchuantian/IPG_rep | yuchuantian | 2024-07-01T14:55:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T14:52:12Z | ---
license: apache-2.0
---
|
hmpm00/bul-id-bulas-final | hmpm00 | 2024-07-01T14:52:21Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:52:21Z | Entry not found |
Smabbler/Multiclass-Disease-Diagnosis-Model | Smabbler | 2024-07-01T18:43:56Z | 0 | 0 | null | [
"text-classification",
"en",
"dataset:duxprajapati/symptom-disease-dataset",
"region:us"
] | text-classification | 2024-07-01T14:53:25Z | ---
datasets:
- duxprajapati/symptom-disease-dataset
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
A predictive machine learning model was developed that classifies data points into distinct disease categories based on symptom data.
- **Developed by:** Priyanka Kamila
- **Model type:** RandomForestClassifier, SVC
- **Language(s) (NLP):** EN
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used directly for disease diagnosis based on binary encoded medical features. By inputting patient symptoms as binary vectors, the model predicts the likely medical condition. Here is how to use it:

**Prepare input data.** Ensure that the input data is formatted as a binary matrix, where each row represents a patient and each column represents a symptom or feature. The target variable should be a categorical label representing the medical condition.

**Load the model.** Load the trained Random Forest or SVM classifier from the repository, for example with `joblib` or `pickle` in Python.

**Make predictions.** Use the loaded model to make predictions on new input data. For instance, in Python:

```python
import joblib

# Load the pre-trained classifier from the repository
model = joblib.load("path_to_model.pkl")

# new_input_data: binary matrix, one row per patient, one column per symptom
predictions = model.predict(new_input_data)
```

**Interpret results.** The model outputs the predicted medical condition for each input row. These predictions can be used by healthcare professionals to assist in diagnosing patients.

This model is intended for direct use in clinical decision support systems or healthcare applications where quick and accurate disease diagnosis is critical. It can be integrated into electronic health records (EHR) systems and patient management software, or used as a standalone diagnostic tool.
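As a worked illustration of the input-preparation step above, one way to build the binary symptom vector; the symptom vocabulary and its ordering here are hypothetical, since the real model expects the exact feature schema used at training time:
```python
import numpy as np

# Hypothetical symptom vocabulary; replace with the training-time feature order
SYMPTOMS = ["fever", "cough", "headache", "joint_pain", "rash"]

def encode(patient_symptoms):
    """Return a 1 x n binary row vector: 1 if a symptom is present, else 0."""
    present = set(patient_symptoms)
    return np.array([[1 if s in present else 0 for s in SYMPTOMS]])

new_input_data = encode(["fever", "rash"])  # shape (1, 5)
```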
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model is designed specifically for diagnosing diseases based on binary encoded medical features. It is important to recognize the limitations and potential misuse of the model:
- **Non-medical applications:** The model is not suitable for non-medical applications or any use cases outside of healthcare diagnostics. Using this model for unrelated classification tasks will yield inaccurate and irrelevant results.
- **Incomplete or inaccurate input data:** The model relies on precise binary encoding of medical symptoms. Providing incomplete, inaccurate, or improperly formatted data can lead to incorrect diagnoses. It is crucial to ensure that input data is complete and correctly formatted according to the binary encoding schema used during model training.
- **Real-time critical decisions:** While the model can aid in diagnosis, it should not be solely relied upon for real-time critical medical decisions without human oversight. Healthcare professionals should verify the model's predictions and consider additional clinical information and diagnostics before making final decisions.
- **Malicious use:** The model should not be used to intentionally misdiagnose or manipulate medical diagnoses for fraudulent purposes. Ensuring ethical use of the model is paramount; it should only be used to assist in improving patient care.
- **Diagnostic scope limitation:** The model is trained on the specific diseases included in the dataset and may not perform well in diagnosing conditions outside the scope of its training data. For diseases not represented in the training data, the model might default to predicting "other," which should be interpreted with caution.
- **General population screening:** This model is not intended for general population screening or predicting disease prevalence in broad, non-clinical populations. It is designed for use with patients already presenting symptoms or those in a clinical setting.

By understanding these limitations and potential misuse scenarios, users can ensure that the model is applied appropriately and ethically in relevant healthcare contexts.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data used for this model consists of a custom dataset with binary encoded medical features. Each row in the dataset represents a patient's symptoms encoded as binary values, and the corresponding label represents the diagnosed disease. The dataset includes a wide range of medical conditions, with the aim of providing a comprehensive diagnostic tool.
- **Source of data:** The dataset was compiled from [duxprajapati/symptom-disease-dataset](https://huggingface.co/datasets/duxprajapati/symptom-disease-dataset) on Hugging Face, then processed for data labeling using Smabbler's QueryLab platform, ensuring an accurate representation of labels for both common and rare diseases.
- **Pre-processing:** Data was pre-processed to ensure consistency and accuracy; this involved cleaning the data, handling missing values, and normalizing the binary encoding. Each symptom was converted into a binary feature (0 or 1), indicating its absence or presence respectively. The labels were mapped to specific diseases using a detailed mapping file to ensure accurate representation.
- **Label mapping:** The labels in the dataset correspond to various diseases. A mapping file (`mapping.json`) was used to translate encoded labels to human-readable disease names. Top labels include diseases like Psoriasis, Malaria, Bronchial Asthma, Dengue, Arthritis, Heart Attack, and many more.
- **Additional documentation:** Detailed documentation on data pre-processing and filtering steps is provided to ensure reproducibility and transparency. The dataset card includes information on the data sources, pre-processing steps, and any additional filtering or transformations applied.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The training procedure for this model involves several key steps to ensure robust and accurate disease diagnosis using Random Forest and SVM classifiers. Below are the detailed steps and technical specifications related to the training procedure:
1. **Data splitting:** The dataset was split into training and testing sets using an 80-20 split ratio. The training set was used to train the classifiers, while the testing set was used to evaluate the model's performance.
2. **Feature selection:** Binary encoded features representing the presence or absence of symptoms were selected as input features. The target variable was the disease label, which was mapped from encoded integers to human-readable disease names.
3. **Model initialization:** Two classifiers were initialized: a Random Forest Classifier and a Support Vector Machine (SVM) Classifier. Both were given default parameters and a fixed random state to ensure reproducibility.
4. **Training the models:** The Random Forest model was trained on the training data using the `fit` method, with hyperparameters such as the number of trees and tree depth tuned to optimize performance. The SVM model was trained the same way, with kernel type, regularization parameters, and other hyperparameters adjusted for optimal classification.
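A minimal sketch of the setup described above, using scikit-learn with default parameters and a fixed random state; `X` and `y` stand for the binary feature matrix and disease labels from the Training Data section:
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 80-20 split, reproducible via the fixed random state
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

rf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
svm = SVC(random_state=42).fit(X_train, y_train)
```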
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The performance of both models was evaluated on the testing set.
Metrics such as accuracy, precision, recall, and f1-score were calculated to assess model performance.
Confusion matrices were generated to visualize the performance of each classifier in predicting the correct disease labels.
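Continuing the sketch from the training section, those metrics could be computed with scikit-learn as follows:
```python
from sklearn.metrics import classification_report, confusion_matrix

for name, clf in [("Random Forest", rf), ("SVM", svm)]:
    preds = clf.predict(X_test)
    print(name)
    print(classification_report(y_test, preds))  # precision, recall, f1, accuracy
    print(confusion_matrix(y_test, preds))       # rows: true labels, columns: predictions
```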


### Results

#### Summary
This model utilizes both Random Forest and SVM classifiers to accurately diagnose a variety of diseases based on binary encoded medical features.
The training involved data pre-processing, feature selection, model training,
and extensive evaluation to ensure reliability. Designed for healthcare applications,
it aids professionals in making informed diagnostic decisions efficiently.
## Model Card Authors
Priyanka Kamila
|
tsavage68/Summary4500_M2_1000steps_1e8rate_SFT | tsavage68 | 2024-07-01T14:59:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T14:55:11Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Summary4500_M2_1000steps_1e8rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_M2_1000steps_1e8rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9563 | 0.0447 | 50 | 1.9699 |
| 1.9559 | 0.0895 | 100 | 1.9696 |
| 1.9636 | 0.1342 | 150 | 1.9675 |
| 1.9608 | 0.1790 | 200 | 1.9666 |
| 1.9525 | 0.2237 | 250 | 1.9654 |
| 1.9514 | 0.2685 | 300 | 1.9645 |
| 1.9704 | 0.3132 | 350 | 1.9644 |
| 1.9596 | 0.3579 | 400 | 1.9639 |
| 1.9558 | 0.4027 | 450 | 1.9641 |
| 1.9481 | 0.4474 | 500 | 1.9635 |
| 1.945 | 0.4922 | 550 | 1.9639 |
| 1.9532 | 0.5369 | 600 | 1.9634 |
| 1.955 | 0.5817 | 650 | 1.9642 |
| 1.9589 | 0.6264 | 700 | 1.9635 |
| 1.9638 | 0.6711 | 750 | 1.9632 |
| 1.9679 | 0.7159 | 800 | 1.9634 |
| 1.9484 | 0.7606 | 850 | 1.9634 |
| 1.9593 | 0.8054 | 900 | 1.9634 |
| 1.9598 | 0.8501 | 950 | 1.9634 |
| 1.9584 | 0.8949 | 1000 | 1.9634 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
MichaelBui/Collection | MichaelBui | 2024-07-02T08:15:05Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:56:09Z | Entry not found |
sekeun/EchoFM | sekeun | 2024-07-01T14:56:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:56:35Z | Entry not found |
davelotito/donut_experiment_bayesian_trial_17 | davelotito | 2024-07-01T16:02:31Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T14:57:18Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut_experiment_bayesian_trial_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_17
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4635
- Bleu: 0.0675
- Precisions: [0.8301886792452831, 0.7738095238095238, 0.7272727272727273, 0.6895424836601307]
- Brevity Penalty: 0.0895
- Length Ratio: 0.2930
- Translation Length: 477
- Reference Length: 1628
- Cer: 0.7603
- Wer: 0.8297
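For context, the BLEU figures above are internally consistent: with a translation length of 477 against a reference length of 1628, BLEU's brevity penalty is exp(1 - 1628/477) ≈ 0.0895, and the length ratio is 477/1628 ≈ 0.2930, matching the reported values.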
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00018015728878154226
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.8044 | 1.0 | 253 | 0.7112 | 0.0610 | [0.7535641547861507, 0.6497695852534562, 0.5809018567639257, 0.5125] | 0.0987 | 0.3016 | 491 | 1628 | 0.7647 | 0.8548 |
| 0.3513 | 2.0 | 506 | 0.5640 | 0.0632 | [0.7908902691511387, 0.7089201877934272, 0.6449864498644986, 0.5801282051282052] | 0.0934 | 0.2967 | 483 | 1628 | 0.7549 | 0.8416 |
| 0.2101 | 3.0 | 759 | 0.4754 | 0.0666 | [0.8198757763975155, 0.744131455399061, 0.6802168021680217, 0.6217948717948718] | 0.0934 | 0.2967 | 483 | 1628 | 0.7508 | 0.8282 |
| 0.0756 | 4.0 | 1012 | 0.4635 | 0.0675 | [0.8301886792452831, 0.7738095238095238, 0.7272727272727273, 0.6895424836601307] | 0.0895 | 0.2930 | 477 | 1628 | 0.7603 | 0.8297 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
|
habulaj/152328129573 | habulaj | 2024-07-01T14:57:47Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:57:44Z | Entry not found |
RymHrizi/lora_Llema38bsideeffect | RymHrizi | 2024-07-01T16:19:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T14:57:45Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** RymHrizi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
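A sketch of how such a checkpoint is typically loaded with Unsloth, assuming the repo stores LoRA adapters over the 4-bit base (the card does not state this explicitly):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="RymHrizi/lora_Llema38bsideeffect",  # adapters over unsloth/llama-3-8b-bnb-4bit
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable optimized inference mode
```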
|
vincent-espitalier/candle-beit | vincent-espitalier | 2024-07-01T22:23:50Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-07-01T14:58:11Z | ---
license: cc-by-nc-4.0
---
This repo contains the pre-trained weights for the [BEiT model](https://github.com/microsoft/unilm/tree/master/beit)
converted into a format that can be used by [candle](https://github.com/huggingface/candle).
## Citing BEiT
As per its [GitHub repository](https://github.com/microsoft/unilm/tree/master/beit):
```
@misc{bao2022beitbertpretrainingimage,
title={BEiT: BERT Pre-Training of Image Transformers},
author={Hangbo Bao and Li Dong and Songhao Piao and Furu Wei},
year={2022},
}
```
|
bobtk/mlx-communityLlama-3-Swallow-8B-Instruct-v0.1-8bit | bobtk | 2024-07-01T14:58:20Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:58:20Z | Entry not found |
sail/data-mixture-doremi-1b | sail | 2024-07-01T14:58:55Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T14:58:55Z | ---
license: mit
---
|
sail/data-mixture-regmix-1b | sail | 2024-07-01T14:59:06Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:59:06Z | Entry not found |
sail/data-mixture-human-1b | sail | 2024-07-01T14:59:22Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:59:22Z | Entry not found |
sail/data-mixture-pile-cc-1b | sail | 2024-07-01T14:59:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T14:59:35Z | Entry not found |
mlx-community/Llama-3-Swallow-8B-Instruct-v0.1-8bit | mlx-community | 2024-07-01T15:10:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"ja",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T15:00:05Z | ---
language:
- en
- ja
license: llama3
library_name: transformers
tags:
- mlx
pipeline_tag: text-generation
model_type: llama
---
# mlx-community/Llama-3-Swallow-8B-Instruct-v0.1-8bit
The model [mlx-community/Llama-3-Swallow-8B-Instruct-v0.1-8bit](https://huggingface.co/mlx-community/Llama-3-Swallow-8B-Instruct-v0.1-8bit) was converted to MLX format from [tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1) using mlx-lm version **0.13.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Llama-3-Swallow-8B-Instruct-v0.1-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
veela4/ELDEN-RING-MOD | veela4 | 2024-07-02T07:21:12Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:00:12Z | Entry not found |
shine1607/masked_language_model | shine1607 | 2024-07-01T15:00:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:00:35Z | Entry not found |
ethedeltae/llama3-8b-oig-unsloth-iitg-final | ethedeltae | 2024-07-01T15:01:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:01:01Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ethedeltae
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yuchuantian/Instruct-IPT-single | yuchuantian | 2024-07-01T15:06:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T15:01:39Z | ---
license: apache-2.0
---
|
yuchuantian/IPG_Tiny | yuchuantian | 2024-07-01T15:19:18Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T15:03:22Z | ---
license: apache-2.0
---
|
sfgefgetg/tytu | sfgefgetg | 2024-07-01T15:03:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:03:51Z | Entry not found |
rashid996958/pix2pix_exp43 | rashid996958 | 2024-07-01T15:06:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:06:04Z | Entry not found |
tsavage68/Summary4500_M2_1000steps_1e7rate_SFT | tsavage68 | 2024-07-01T15:12:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T15:06:39Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Summary4500_M2_1000steps_1e7rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_M2_1000steps_1e7rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8849 | 0.0447 | 50 | 1.8726 |
| 1.4871 | 0.0895 | 100 | 1.4453 |
| 0.8608 | 0.1342 | 150 | 0.7955 |
| 0.4432 | 0.1790 | 200 | 0.4648 |
| 0.4269 | 0.2237 | 250 | 0.4556 |
| 0.424 | 0.2685 | 300 | 0.4519 |
| 0.4417 | 0.3132 | 350 | 0.4497 |
| 0.4253 | 0.3579 | 400 | 0.4481 |
| 0.4247 | 0.4027 | 450 | 0.4470 |
| 0.4152 | 0.4474 | 500 | 0.4461 |
| 0.4116 | 0.4922 | 550 | 0.4453 |
| 0.4174 | 0.5369 | 600 | 0.4448 |
| 0.4201 | 0.5817 | 650 | 0.4446 |
| 0.423 | 0.6264 | 700 | 0.4444 |
| 0.4243 | 0.6711 | 750 | 0.4441 |
| 0.4325 | 0.7159 | 800 | 0.4442 |
| 0.4128 | 0.7606 | 850 | 0.4441 |
| 0.4207 | 0.8054 | 900 | 0.4441 |
| 0.424 | 0.8501 | 950 | 0.4442 |
| 0.4219 | 0.8949 | 1000 | 0.4442 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Aiden163/TiaTeste01 | Aiden163 | 2024-07-01T15:08:41Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:08:41Z | Entry not found |
itay-nakash/model_387dff9370_sweep_expert-oath-1165 | itay-nakash | 2024-07-01T15:08:58Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:08:58Z | Entry not found |
net31/naschainv145 | net31 | 2024-07-02T21:23:55Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:10:43Z | Entry not found |
itay-nakash/model_387dff9370_sweep_drawn-butterfly-1166 | itay-nakash | 2024-07-01T15:11:10Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:11:10Z | Entry not found |
gkngm/llama-financial-sentiment-analysis-peft | gkngm | 2024-07-01T15:12:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:12:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
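The repository name indicates PEFT adapters, so loading would plausibly look like the sketch below; the base model shown is an assumption, check the adapter config for the real one:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # assumed base model, not confirmed by the card
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "gkngm/llama-financial-sentiment-analysis-peft")
tokenizer = AutoTokenizer.from_pretrained(base)
```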
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gkngm/finllm-financial-sentiment-analysis-peft | gkngm | 2024-07-01T15:12:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:12:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Behshadsheikhi/Morteza | Behshadsheikhi | 2024-07-01T15:18:08Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T15:14:52Z | ---
license: openrail
---
|
habulaj/66719121840 | habulaj | 2024-07-01T15:17:16Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:17:14Z | Entry not found |
Arjuna17/results | Arjuna17 | 2024-07-01T15:19:05Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:19:05Z | Entry not found |
LuluXML/lora_model | LuluXML | 2024-07-01T15:19:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:19:22Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** LuluXML
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jaimeazevedo/HQs | jaimeazevedo | 2024-07-01T15:19:42Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T15:19:42Z | ---
license: mit
---
|
nikest/nps-ft | nikest | 2024-07-01T15:34:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T15:23:42Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
necrobradley/face_predict_emotion | necrobradley | 2024-07-01T15:27:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:27:29Z | Entry not found |
luisrguerra/test | luisrguerra | 2024-07-01T15:29:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:29:36Z | Entry not found |
sanamosuk93/news_sum | sanamosuk93 | 2024-07-01T15:41:59Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-01T15:32:55Z | Entry not found |
ermannocavalli/face_of_FedericaFedeSala | ermannocavalli | 2024-07-01T15:34:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:34:09Z | Entry not found |
EthanRhys/Greta-Masters-EX | EthanRhys | 2024-07-01T15:34:52Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-07-01T15:34:20Z | ---
license: openrail++
---
|
camillop/phi-mini-company-classification-adapters | camillop | 2024-07-01T15:51:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:34:45Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** camillop
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
messawey/historyqa_model | messawey | 2024-07-01T15:35:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:35:17Z | Entry not found |
js-kim/llama2-qlora-finetuned-french | js-kim | 2024-07-01T15:35:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:35:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sushanthr/tinyLlama-1.1B-Chat-v1.0-fp16-webnn | sushanthr | 2024-07-01T15:45:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:37:57Z | A special port of https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0, in FP16 so that weights are loadable with WebNN.
See
https://sushanthr.github.io/RapidChat/
https://github.com/sushanthr/RapidChat |
bartowski/Qwen2-7B-Multilingual-RP-exl2 | bartowski | 2024-07-01T15:38:07Z | 0 | 0 | null | [
"text-generation",
"en",
"ko",
"ja",
"zh",
"es",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-01T15:38:06Z | ---
license: apache-2.0
language:
- en
- ko
- ja
- zh
- es
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Qwen2-7B-Multilingual-RP
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.1.6">turboderp's ExLlamaV2 v0.1.6</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
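For reference, the same ChatML-style prompt can be built with the tokenizer's chat template (a sketch assuming the original repository ships the template shown above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("maywell/Qwen2-7B-Multilingual-RP")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Renders the <|im_start|>/<|im_end|> turns and appends the assistant header
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```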
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Qwen2-7B-Multilingual-RP-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Qwen2-7B-Multilingual-RP-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Qwen2-7B-Multilingual-RP-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Qwen2-7B-Multilingual-RP-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Qwen2-7B-Multilingual-RP-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Qwen2-7B-Multilingual-RP-exl2 Qwen2-7B-Multilingual-RP-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/Qwen2-7B-Multilingual-RP-exl2 --revision 6_5 --local-dir Qwen2-7B-Multilingual-RP-exl2-6_5
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/Qwen2-7B-Multilingual-RP-exl2 --revision 6_5 --local-dir Qwen2-7B-Multilingual-RP-exl2-6.5
```
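The same branch can also be fetched from Python with `huggingface_hub` (a hedged alternative to the CLI calls above):
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Qwen2-7B-Multilingual-RP-exl2",
    revision="6_5",
    local_dir="Qwen2-7B-Multilingual-RP-exl2-6_5",
)
```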
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
nftwp/maxart | nftwp | 2024-07-02T15:33:19Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:42:49Z | Entry not found |
Simple979/ElmoBaby | Simple979 | 2024-07-01T17:13:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:43:36Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Simple979
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jaxmetaverse/clay | jaxmetaverse | 2024-07-01T15:44:42Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:44:42Z | Entry not found |
HikariLight/Mistral-SUFT-RL | HikariLight | 2024-07-01T15:45:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T15:45:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tsavage68/Summary4500_M2_200steps_1e7rate_SFT | tsavage68 | 2024-07-01T16:19:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T15:45:26Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Summary4500_M2_200steps_1e7rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_M2_200steps_1e7rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
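For readers who want to reproduce this setup, the hyperparameters above map roughly onto the following `TrainingArguments` (a sketch, not the exact training script; `output_dir` is illustrative):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Summary4500_M2_200steps_1e7rate_SFT",
    learning_rate=1e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # effective train batch size of 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=200,
    seed=42,  # Adam betas/epsilon are the transformers defaults listed above
)
```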
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8849 | 0.0447 | 50 | 1.8726 |
| 1.4871 | 0.0895 | 100 | 1.4453 |
| 1.0265 | 0.1342 | 150 | 1.0225 |
| 0.9518 | 0.1790 | 200 | 0.9787 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
fokyoum9/test_model | fokyoum9 | 2024-07-01T15:50:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:50:51Z | Entry not found |
skyconnectiva/sky | skyconnectiva | 2024-07-01T15:50:58Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T15:50:58Z | ---
license: mit
---
|
shuyuej/MedLLaMA3-70B-base-INT4-G2048-GPTQ | shuyuej | 2024-07-01T20:05:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T15:56:46Z | ---
license: apache-2.0
---
|
KeroroK66/Yoruichi | KeroroK66 | 2024-07-01T15:57:09Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T15:56:47Z | ---
license: openrail
---
|
LinxuanPastel/parapparappa | LinxuanPastel | 2024-07-01T16:27:14Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T15:56:49Z | Entry not found |
chunyeow/gemma-Code-Instruct-Finetune-test | chunyeow | 2024-07-01T16:03:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T15:56:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leduyson2603/autotrain-2dpe1-u5jfx | leduyson2603 | 2024-07-01T16:02:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"autotrain",
"base_model:google-bert/bert-base-german-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-01T15:58:39Z |
---
tags:
- autotrain
- token-classification
base_model: google-bert/bert-base-german-cased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Token Classification
## Validation Metrics
- loss: 2.344400405883789
- precision: 0.09302325581395349
- recall: 0.25
- f1: 0.13559322033898305
- accuracy: 0.5533980582524272
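As a sanity check, the reported F1 is consistent with the precision and recall above:
```python
precision, recall = 0.09302325581395349, 0.25
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.13559322033898305, matching the reported value
```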
|
tsavage68/Summary4500_M2_400steps_1e8rate_SFT | tsavage68 | 2024-07-01T16:06:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T16:02:10Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Summary4500_M2_400steps_1e8rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_M2_400steps_1e8rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 400
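Equivalently, the optimizer/scheduler pair above can be built by hand (a hedged sketch; the actual training script may differ):
```python
import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-8, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=400
)
```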
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9563 | 0.0447 | 50 | 1.9699 |
| 1.9559 | 0.0895 | 100 | 1.9696 |
| 1.9642 | 0.1342 | 150 | 1.9686 |
| 1.9621 | 0.1790 | 200 | 1.9673 |
| 1.9548 | 0.2237 | 250 | 1.9676 |
| 1.9541 | 0.2685 | 300 | 1.9678 |
| 1.9743 | 0.3132 | 350 | 1.9675 |
| 1.964 | 0.3579 | 400 | 1.9675 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
davelotito/donut_experiment_bayesian_trial_18 | davelotito | 2024-07-01T16:36:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T16:02:32Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut_experiment_bayesian_trial_18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_18
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5643
- Bleu: 0.0698
- Precisions: [0.8340248962655602, 0.7741176470588236, 0.7309782608695652, 0.6784565916398714]
- Brevity Penalty: 0.0928
- Length Ratio: 0.2961
- Translation Length: 482
- Reference Length: 1628
- Cer: 0.7496
- Wer: 0.8244
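The reported brevity penalty follows directly from the two lengths above: BLEU applies BP = exp(1 − reference_length / translation_length) when the translation is shorter than the reference:
```python
import math

bp = math.exp(1 - 1628 / 482)
print(round(bp, 4))  # 0.0928, matching the reported brevity penalty
```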
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7803961202565393e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.0287 | 1.0 | 253 | 0.5097 | 0.0722 | [0.8374485596707819, 0.7762237762237763, 0.7338709677419355, 0.6888888888888889] | 0.0954 | 0.2985 | 486 | 1628 | 0.7506 | 0.8208 |
| 0.0159 | 2.0 | 506 | 0.5583 | 0.0697 | [0.8319502074688797, 0.7741176470588236, 0.7282608695652174, 0.6784565916398714] | 0.0928 | 0.2961 | 482 | 1628 | 0.7496 | 0.8232 |
| 0.0118 | 3.0 | 759 | 0.5643 | 0.0698 | [0.8340248962655602, 0.7741176470588236, 0.7309782608695652, 0.6784565916398714] | 0.0928 | 0.2961 | 482 | 1628 | 0.7496 | 0.8244 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
|
cassador/4bs8lr2 | cassador | 2024-07-01T16:03:03Z | 0 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6915",
"loss:SoftmaxLoss",
"id",
"dataset:afaji/indonli",
"arxiv:1908.10084",
"base_model:indobenchmark/indobert-base-p2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-07-01T16:02:38Z | ---
base_model: indobenchmark/indobert-base-p2
datasets:
- afaji/indonli
language:
- id
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6915
- loss:SoftmaxLoss
widget:
- source_sentence: Pesta Olahraga Asia Tenggara atau Southeast Asian Games, biasa
disingkat SEA Games, adalah ajang olahraga yang diadakan setiap dua tahun dan
melibatkan 11 negara Asia Tenggara.
sentences:
- Sekarang tahun 2017.
- Warna kulit tidak mempengaruhi waktu berjemur yang baik untuk mengatifkan pro-vitamin
D3.
- Pesta Olahraga Asia Tenggara diadakan setiap tahun.
- source_sentence: Menjalani aktivitas Ramadhan di tengah wabah Corona tentunya tidak
mudah.
sentences:
- Tidak ada observasi yang pernah dilansir oleh Business Insider.
- Wabah Corona membuat aktivitas Ramadhan tidak mudah dijalani.
- Piala Sudirman pertama digelar pada tahun 1989.
- source_sentence: Dalam bidang politik, partai ini memperjuangkan agar kekuasaan
sepenuhnya berada di tangan rakyat.
sentences:
- Galileo tidak berhasil mengetes hasil dari Hukum Inert.
- Kudeta 14 Februari 1946 gagal merebut kekuasaan Belanda.
- Partai ini berusaha agar kekuasaan sepenuhnya berada di tangan rakyat.
- source_sentence: Keluarga mendiang Prince menuduh layanan musik streaming Tidal
memasukkan karya milik sang penyanyi legendaris tanpa izin .
sentences:
- Rosier adalah pelayan setia Lord Voldemort.
- Bangunan ini digunakan untuk penjualan.
- Keluarga mendiang Prince sudah memberi izin kepada TImbal untuk menggunakan lagu
milik Prince.
- source_sentence: Tujuan dari acara dengar pendapat CRTC adalah untuk mengumpulkan
respons dari pada pemangku kepentingan industri ini dan dari masyarakat umum.
sentences:
- Pembuat Rooms hanya bisa membuat meeting yang terbuka.
- Masyarakat umum dilibatkan untuk memberikan respon dalam acara dengar pendapat
CRTC.
- Eminem dirasa tidak akan memulai kembali kariernya tahun ini.
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-base-p2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.596170613538296
name: Pearson Cosine
- type: spearman_cosine
value: 0.5861883707539226
name: Spearman Cosine
- type: pearson_manhattan
value: 0.5845731839861422
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.5782563614870986
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.5900038609486801
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.5795936352515776
name: Spearman Euclidean
- type: pearson_dot
value: 0.5995818925993402
name: Pearson Dot
- type: spearman_dot
value: 0.5930379614276564
name: Spearman Dot
- type: pearson_max
value: 0.5995818925993402
name: Pearson Max
- type: spearman_max
value: 0.5930379614276564
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.32544389544371366
name: Pearson Cosine
- type: spearman_cosine
value: 0.29994363722612716
name: Spearman Cosine
- type: pearson_manhattan
value: 0.2875495017479062
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.2810442265188576
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.29788552102363436
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.28248957351462056
name: Spearman Euclidean
- type: pearson_dot
value: 0.34645175745533086
name: Pearson Dot
- type: spearman_dot
value: 0.3331449893649715
name: Spearman Dot
- type: pearson_max
value: 0.34645175745533086
name: Pearson Max
- type: spearman_max
value: 0.3331449893649715
name: Spearman Max
---
# SentenceTransformer based on indobenchmark/indobert-base-p2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on the [afaji/indonli](https://huggingface.co/datasets/afaji/indonli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) <!-- at revision 94b4e0a82081fa57f227fcc2024d1ea89b57ac1f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [afaji/indonli](https://huggingface.co/datasets/afaji/indonli)
- **Language:** id
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("cassador/4bs8lr2")
# Run inference
sentences = [
'Tujuan dari acara dengar pendapat CRTC adalah untuk mengumpulkan respons dari pada pemangku kepentingan industri ini dan dari masyarakat umum.',
'Masyarakat umum dilibatkan untuk memberikan respon dalam acara dengar pendapat CRTC.',
'Pembuat Rooms hanya bisa membuat meeting yang terbuka.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.5962 |
| **spearman_cosine** | **0.5862** |
| pearson_manhattan | 0.5846 |
| spearman_manhattan | 0.5783 |
| pearson_euclidean | 0.59 |
| spearman_euclidean | 0.5796 |
| pearson_dot | 0.5996 |
| spearman_dot | 0.593 |
| pearson_max | 0.5996 |
| spearman_max | 0.593 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.3254 |
| **spearman_cosine** | **0.2999** |
| pearson_manhattan | 0.2875 |
| spearman_manhattan | 0.281 |
| pearson_euclidean | 0.2979 |
| spearman_euclidean | 0.2825 |
| pearson_dot | 0.3465 |
| spearman_dot | 0.3331 |
| pearson_max | 0.3465 |
| spearman_max | 0.3331 |
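Both tables come from `EmbeddingSimilarityEvaluator`; a minimal sketch of computing such scores on your own labeled pairs (the sentences below are reused from the widget examples, and the binary gold scores are an assumption):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("cassador/4bs8lr2")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "Wabah Corona membuat aktivitas Ramadhan tidak mudah dijalani.",
        "Pesta Olahraga Asia Tenggara diadakan setiap tahun.",
        "Partai ini berusaha agar kekuasaan sepenuhnya berada di tangan rakyat.",
    ],
    sentences2=[
        "Menjalani aktivitas Ramadhan di tengah wabah Corona tentunya tidak mudah.",
        "Sekarang tahun 2017.",
        "Dalam bidang politik, partai ini memperjuangkan agar kekuasaan sepenuhnya berada di tangan rakyat.",
    ],
    scores=[1.0, 0.0, 1.0],  # gold similarity in [0, 1]
    name="sts-demo",
)
print(evaluator(model))  # dict of Pearson/Spearman correlations
```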
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### afaji/indonli
* Dataset: [afaji/indonli](https://huggingface.co/datasets/afaji/indonli)
* Size: 6,915 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 12 tokens</li><li>mean: 29.26 tokens</li><li>max: 135 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.13 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>0: ~51.00%</li><li>1: ~49.00%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------|:---------------|
| <code>Presiden Joko Widodo (Jokowi) menyampaikan prediksi bahwa wabah virus Corona (COVID-19) di Indonesia akan selesai akhir tahun ini.</code> | <code>Prediksi akhir wabah tidak disampaikan Jokowi.</code> | <code>0</code> |
| <code>Meski biasanya hanya digunakan di fasilitas kesehatan, saat ini masker dan sarung tangan sekali pakai banyak dipakai di tingkat rumah tangga.</code> | <code>Masker sekali pakai banyak dipakai di tingkat rumah tangga.</code> | <code>1</code> |
| <code>Seperti namanya, paket internet sahur Telkomsel ini ditujukan bagi pengguna yang menginginkan kuota ekstra, untuk menemani momen sahur sepanjang bulan puasa.</code> | <code>Paket internet sahur tidak ditujukan untuk saat sahur.</code> | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
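A minimal sketch of this training objective with the classic `fit` API (the texts are hypothetical placeholders; labels are binary as in the samples above):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("indobenchmark/indobert-base-p2")
train_examples = [
    InputExample(texts=["premise text", "hypothesis text"], label=1),
    InputExample(texts=["another premise", "another hypothesis"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=2,
)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4, warmup_steps=100)
```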
### Evaluation Dataset
#### afaji/indonli
* Dataset: [afaji/indonli](https://huggingface.co/datasets/afaji/indonli)
* Size: 1,556 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 28.07 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.15 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>0: ~47.90%</li><li>1: ~52.10%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|:---------------|
| <code>Manuskrip tersebut berisi tiga catatan yang menceritakan bagaimana peristiwa jatuhnya meteorit serta laporan kematian akibat kejadian tersebut seperti dilansir dari Science Alert, Sabtu (25/4/2020).</code> | <code>Manuskrip tersebut tidak mencatat laporan kematian.</code> | <code>0</code> |
| <code>Dilansir dari Business Insider, menurut observasi dari Mauna Loa Observatory di Hawaii pada karbon dioksida (CO2) di level mencapai 410 ppm tidak langsung memberikan efek pada pernapasan, karena tubuh manusia juga masih membutuhkan CO2 dalam kadar tertentu.</code> | <code>Tidak ada observasi yang pernah dilansir oleh Business Insider.</code> | <code>0</code> |
| <code>Seorang wanita asal New York mengaku sangat benci air putih.</code> | <code>Tidak ada orang dari New York yang membenci air putih.</code> | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:|
| 0 | 0 | - | - | 0.1277 | - |
| 0.1156 | 100 | 0.6805 | - | - | - |
| 0.2312 | 200 | 0.5137 | - | - | - |
| 0.3468 | 300 | 0.5108 | - | - | - |
| 0.4624 | 400 | 0.5113 | - | - | - |
| 0.5780 | 500 | 0.5102 | - | - | - |
| 0.6936 | 600 | 0.5212 | - | - | - |
| 0.8092 | 700 | 0.5035 | - | - | - |
| 0.9249 | 800 | 0.472 | - | - | - |
| 1.0 | 865 | - | 0.4468 | 0.5249 | - |
| 1.0405 | 900 | 0.4193 | - | - | - |
| 1.1561 | 1000 | 0.3509 | - | - | - |
| 1.2717 | 1100 | 0.3709 | - | - | - |
| 1.3873 | 1200 | 0.3538 | - | - | - |
| 1.5029 | 1300 | 0.3619 | - | - | - |
| 1.6185 | 1400 | 0.388 | - | - | - |
| 1.7341 | 1500 | 0.3657 | - | - | - |
| 1.8497 | 1600 | 0.3577 | - | - | - |
| 1.9653 | 1700 | 0.4149 | - | - | - |
| 2.0 | 1730 | - | 0.4535 | 0.5503 | - |
| 2.0809 | 1800 | 0.3037 | - | - | - |
| 2.1965 | 1900 | 0.2213 | - | - | - |
| 2.3121 | 2000 | 0.2531 | - | - | - |
| 2.4277 | 2100 | 0.2281 | - | - | - |
| 2.5434 | 2200 | 0.2684 | - | - | - |
| 2.6590 | 2300 | 0.2154 | - | - | - |
| 2.7746 | 2400 | 0.2556 | - | - | - |
| 2.8902 | 2500 | 0.2515 | - | - | - |
| 3.0 | 2595 | - | 0.6295 | 0.5799 | - |
| 3.0058 | 2600 | 0.2158 | - | - | - |
| 3.1214 | 2700 | 0.1445 | - | - | - |
| 3.2370 | 2800 | 0.1191 | - | - | - |
| 3.3526 | 2900 | 0.1514 | - | - | - |
| 3.4682 | 3000 | 0.1223 | - | - | - |
| 3.5838 | 3100 | 0.1581 | - | - | - |
| 3.6994 | 3200 | 0.112 | - | - | - |
| 3.8150 | 3300 | 0.1396 | - | - | - |
| 3.9306 | 3400 | 0.1568 | - | - | - |
| 4.0 | 3460 | - | 0.8635 | 0.5862 | 0.2999 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
PointPresse/phi3-mini-news-analysis-fr-lora | PointPresse | 2024-07-01T16:07:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T16:04:24Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** PointPresse
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Iqbaliswinning/results | Iqbaliswinning | 2024-07-01T16:07:42Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-01T16:07:25Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
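No usage snippet was provided; a hedged inference sketch for an image-classification fine-tune like this one (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Iqbaliswinning/results")
print(classifier("example.jpg"))  # labels depend on the (undocumented) training dataset
```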
|
rinogrego/biomedlm-2.7b-finetuned-medmcqa | rinogrego | 2024-07-03T01:28:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T16:08:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xiangruowen/ruoxiang_test | xiangruowen | 2024-07-01T16:22:00Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T16:09:46Z | ---
license: mit
---
|
bvrc1518/vosk | bvrc1518 | 2024-07-01T16:09:49Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T16:09:49Z | ---
license: mit
---
|
lucyknada/amxl-reupload | lucyknada | 2024-07-01T16:21:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T16:20:00Z | Entry not found |
cmmann/q-FrozenLake-v1-4x4-noSlippery | cmmann | 2024-07-01T16:23:32Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-01T16:23:30Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="cmmann/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
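`load_from_hub` is not imported above; in the Deep RL course it is a small user-defined helper around `huggingface_hub`. A minimal sketch (the pickled dictionary and its keys, e.g. `env_id` and `qtable`, follow the course convention and are assumptions here):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dictionary from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```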
|
samSepiol101/newRepo | samSepiol101 | 2024-07-01T17:02:21Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T16:24:17Z | Entry not found |
cmmann/q-taxi | cmmann | 2024-07-01T16:42:21Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-01T16:27:13Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: -35.46 +/- 52.99
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="cmmann/q-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
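Once loaded, the Q-table can be rolled out greedily. A minimal sketch, assuming the dictionary stores the table under `qtable` (the course convention) and a classic `gym` step API:
```python
import numpy as np

state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```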
|
sunilswain/llama2-7b-chat-EssplTravelPolicy3.7k-epoch6 | sunilswain | 2024-07-01T16:39:39Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T16:30:39Z | Entry not found |
habulaj/142653118795 | habulaj | 2024-07-01T16:35:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T16:35:06Z | Entry not found |
allison1221/t5-small-finetuned-xsum | allison1221 | 2024-07-02T01:44:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-01T16:35:12Z | Entry not found |
davelotito/donut_experiment_bayesian_trial_19 | davelotito | 2024-07-01T17:11:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T16:36:53Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut_experiment_bayesian_trial_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_19
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5754
- Bleu: 0.0724
- Precisions: [0.8450413223140496, 0.7892271662763466, 0.7486486486486487, 0.7028753993610224]
- Brevity Penalty: 0.0941
- Length Ratio: 0.2973
- Translation Length: 484
- Reference Length: 1628
- Cer: 0.7493
- Wer: 0.8177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.0668629620167924e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
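For reference, a hedged sketch of how these settings map onto `transformers` `Seq2SeqTrainingArguments`; the trainer wiring itself is an assumption, since the card does not state which training script was used:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="donut_experiment_bayesian_trial_19",
    learning_rate=1.0668629620167924e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # total train batch size of 2
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```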
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.0069 | 1.0 | 253 | 0.5825 | 0.0710 | [0.8423236514522822, 0.7858823529411765, 0.7418478260869565, 0.6977491961414791] | 0.0928 | 0.2961 | 482 | 1628 | 0.7509 | 0.8197 |
| 0.0113 | 2.0 | 506 | 0.5684 | 0.0703 | [0.841995841995842, 0.785377358490566, 0.7411444141689373, 0.6935483870967742] | 0.0921 | 0.2955 | 481 | 1628 | 0.7505 | 0.8199 |
| 0.0074 | 3.0 | 759 | 0.5754 | 0.0724 | [0.8450413223140496, 0.7892271662763466, 0.7486486486486487, 0.7028753993610224] | 0.0941 | 0.2973 | 484 | 1628 | 0.7493 | 0.8177 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
|
lielbin/BabyBERTa-wikipedia-french-without-Masking-finetuned-Fr-SQuAD | lielbin | 2024-07-01T17:17:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-07-01T16:38:48Z | ---
tags:
- generated_from_trainer
model-index:
- name: BabyBERTa-wikipedia-french-without-Masking-finetuned-Fr-SQuAD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BabyBERTa-wikipedia-french-without-Masking-finetuned-Fr-SQuAD
This model was trained from scratch on an unspecified dataset.
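A hedged usage sketch for French extractive QA with this checkpoint (the question and context below are illustrative only):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lielbin/BabyBERTa-wikipedia-french-without-Masking-finetuned-Fr-SQuAD",
)
result = qa(
    question="Où se trouve la tour Eiffel ?",
    context="La tour Eiffel se trouve à Paris, en France.",
)
print(result["answer"])
```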
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
BTX24/deit_birads_classifier | BTX24 | 2024-07-01T16:38:52Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T16:38:52Z | Entry not found |
VoxAI/Hermes-2-Theta-Llama-3-8B-DriveThru-ORPO-v1-master-0.707-adapter | VoxAI | 2024-07-01T16:44:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T16:40:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vincentantu/computer_vision | vincentantu | 2024-07-01T16:41:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T16:41:23Z | Entry not found |
veronica08041991/naschainv249 | veronica08041991 | 2024-07-02T03:18:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T16:41:39Z | Entry not found |
hishamcse/DQN-MsPacman-v4 | hishamcse | 2024-07-01T16:43:57Z | 0 | 0 | null | [
"MsPacman-v4",
"dqn",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-01T16:41:40Z | ---
tags:
- MsPacman-v4
- dqn
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: DQN-MsPacman-v4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacman-v4
type: MsPacman-v4
metrics:
- type: mean_reward
value: 249.00 +/- 129.26
name: mean_reward
verified: false
---
# **DQN** Agent playing **MsPacman-v4**
For details, see: https://www.kaggle.com/code/syedjarullahhisham/drl-huggingface-extra-unit-3-mspacmandqn-scratch
|
manbeast3b/ZZZZZZZZdriver121c | manbeast3b | 2024-07-01T16:45:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T16:42:47Z | Entry not found |
ra9hu/ra9hu | ra9hu | 2024-07-01T16:44:24Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T16:44:24Z | Entry not found |
gjonesQ02/StatementOfWork_Generator_Omega_BS_512_2 | gjonesQ02 | 2024-07-01T20:05:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T16:45:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilgpt2
model-index:
- name: StatementOfWork_Generator_Omega_BS_512_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# StatementOfWork_Generator_Omega_BS_512_2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8120
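A hedged usage sketch for generating statement-of-work text with this checkpoint (the prompt is an illustrative assumption):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gjonesQ02/StatementOfWork_Generator_Omega_BS_512_2",
)
print(generator("Scope of Work:", max_new_tokens=64)[0]["generated_text"])
```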
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 0.9839 |
| No log | 2.0 | 8 | 0.9786 |
| No log | 3.0 | 12 | 0.9767 |
| No log | 4.0 | 16 | 0.9757 |
| No log | 5.0 | 20 | 0.9716 |
| No log | 6.0 | 24 | 0.9670 |
| No log | 7.0 | 28 | 0.9663 |
| No log | 8.0 | 32 | 0.9627 |
| No log | 9.0 | 36 | 0.9571 |
| No log | 10.0 | 40 | 0.9573 |
| No log | 11.0 | 44 | 0.9520 |
| No log | 12.0 | 48 | 0.9511 |
| No log | 13.0 | 52 | 0.9486 |
| No log | 14.0 | 56 | 0.9425 |
| No log | 15.0 | 60 | 0.9440 |
| No log | 16.0 | 64 | 0.9392 |
| No log | 17.0 | 68 | 0.9357 |
| No log | 18.0 | 72 | 0.9368 |
| No log | 19.0 | 76 | 0.9333 |
| No log | 20.0 | 80 | 0.9284 |
| No log | 21.0 | 84 | 0.9260 |
| No log | 22.0 | 88 | 0.9244 |
| No log | 23.0 | 92 | 0.9228 |
| No log | 24.0 | 96 | 0.9192 |
| No log | 25.0 | 100 | 0.9163 |
| No log | 26.0 | 104 | 0.9164 |
| No log | 27.0 | 108 | 0.9135 |
| No log | 28.0 | 112 | 0.9107 |
| No log | 29.0 | 116 | 0.9105 |
| No log | 30.0 | 120 | 0.9068 |
| No log | 31.0 | 124 | 0.9050 |
| No log | 32.0 | 128 | 0.9034 |
| No log | 33.0 | 132 | 0.9012 |
| No log | 34.0 | 136 | 0.8966 |
| No log | 35.0 | 140 | 0.8968 |
| No log | 36.0 | 144 | 0.8953 |
| No log | 37.0 | 148 | 0.8920 |
| No log | 38.0 | 152 | 0.8920 |
| No log | 39.0 | 156 | 0.8912 |
| No log | 40.0 | 160 | 0.8877 |
| No log | 41.0 | 164 | 0.8871 |
| No log | 42.0 | 168 | 0.8857 |
| No log | 43.0 | 172 | 0.8800 |
| No log | 44.0 | 176 | 0.8789 |
| No log | 45.0 | 180 | 0.8831 |
| No log | 46.0 | 184 | 0.8794 |
| No log | 47.0 | 188 | 0.8757 |
| No log | 48.0 | 192 | 0.8760 |
| No log | 49.0 | 196 | 0.8730 |
| No log | 50.0 | 200 | 0.8726 |
| No log | 51.0 | 204 | 0.8719 |
| No log | 52.0 | 208 | 0.8689 |
| No log | 53.0 | 212 | 0.8691 |
| No log | 54.0 | 216 | 0.8679 |
| No log | 55.0 | 220 | 0.8633 |
| No log | 56.0 | 224 | 0.8623 |
| No log | 57.0 | 228 | 0.8624 |
| No log | 58.0 | 232 | 0.8610 |
| No log | 59.0 | 236 | 0.8601 |
| No log | 60.0 | 240 | 0.8586 |
| No log | 61.0 | 244 | 0.8583 |
| No log | 62.0 | 248 | 0.8564 |
| No log | 63.0 | 252 | 0.8552 |
| No log | 64.0 | 256 | 0.8545 |
| No log | 65.0 | 260 | 0.8526 |
| No log | 66.0 | 264 | 0.8513 |
| No log | 67.0 | 268 | 0.8508 |
| No log | 68.0 | 272 | 0.8501 |
| No log | 69.0 | 276 | 0.8484 |
| No log | 70.0 | 280 | 0.8479 |
| No log | 71.0 | 284 | 0.8465 |
| No log | 72.0 | 288 | 0.8464 |
| No log | 73.0 | 292 | 0.8452 |
| No log | 74.0 | 296 | 0.8442 |
| No log | 75.0 | 300 | 0.8443 |
| No log | 76.0 | 304 | 0.8425 |
| No log | 77.0 | 308 | 0.8410 |
| No log | 78.0 | 312 | 0.8402 |
| No log | 79.0 | 316 | 0.8394 |
| No log | 80.0 | 320 | 0.8385 |
| No log | 81.0 | 324 | 0.8380 |
| No log | 82.0 | 328 | 0.8380 |
| No log | 83.0 | 332 | 0.8369 |
| No log | 84.0 | 336 | 0.8356 |
| No log | 85.0 | 340 | 0.8351 |
| No log | 86.0 | 344 | 0.8343 |
| No log | 87.0 | 348 | 0.8326 |
| No log | 88.0 | 352 | 0.8331 |
| No log | 89.0 | 356 | 0.8328 |
| No log | 90.0 | 360 | 0.8306 |
| No log | 91.0 | 364 | 0.8310 |
| No log | 92.0 | 368 | 0.8314 |
| No log | 93.0 | 372 | 0.8295 |
| No log | 94.0 | 376 | 0.8287 |
| No log | 95.0 | 380 | 0.8286 |
| No log | 96.0 | 384 | 0.8276 |
| No log | 97.0 | 388 | 0.8270 |
| No log | 98.0 | 392 | 0.8262 |
| No log | 99.0 | 396 | 0.8251 |
| No log | 100.0 | 400 | 0.8241 |
| No log | 101.0 | 404 | 0.8231 |
| No log | 102.0 | 408 | 0.8225 |
| No log | 103.0 | 412 | 0.8235 |
| No log | 104.0 | 416 | 0.8234 |
| No log | 105.0 | 420 | 0.8225 |
| No log | 106.0 | 424 | 0.8219 |
| No log | 107.0 | 428 | 0.8209 |
| No log | 108.0 | 432 | 0.8204 |
| No log | 109.0 | 436 | 0.8195 |
| No log | 110.0 | 440 | 0.8191 |
| No log | 111.0 | 444 | 0.8191 |
| No log | 112.0 | 448 | 0.8193 |
| No log | 113.0 | 452 | 0.8197 |
| No log | 114.0 | 456 | 0.8191 |
| No log | 115.0 | 460 | 0.8179 |
| No log | 116.0 | 464 | 0.8176 |
| No log | 117.0 | 468 | 0.8173 |
| No log | 118.0 | 472 | 0.8172 |
| No log | 119.0 | 476 | 0.8174 |
| No log | 120.0 | 480 | 0.8171 |
| No log | 121.0 | 484 | 0.8169 |
| No log | 122.0 | 488 | 0.8168 |
| No log | 123.0 | 492 | 0.8162 |
| No log | 124.0 | 496 | 0.8161 |
| 0.3706 | 125.0 | 500 | 0.8160 |
| 0.3706 | 126.0 | 504 | 0.8156 |
| 0.3706 | 127.0 | 508 | 0.8145 |
| 0.3706 | 128.0 | 512 | 0.8143 |
| 0.3706 | 129.0 | 516 | 0.8143 |
| 0.3706 | 130.0 | 520 | 0.8145 |
| 0.3706 | 131.0 | 524 | 0.8147 |
| 0.3706 | 132.0 | 528 | 0.8142 |
| 0.3706 | 133.0 | 532 | 0.8136 |
| 0.3706 | 134.0 | 536 | 0.8136 |
| 0.3706 | 135.0 | 540 | 0.8138 |
| 0.3706 | 136.0 | 544 | 0.8139 |
| 0.3706 | 137.0 | 548 | 0.8140 |
| 0.3706 | 138.0 | 552 | 0.8138 |
| 0.3706 | 139.0 | 556 | 0.8134 |
| 0.3706 | 140.0 | 560 | 0.8130 |
| 0.3706 | 141.0 | 564 | 0.8128 |
| 0.3706 | 142.0 | 568 | 0.8127 |
| 0.3706 | 143.0 | 572 | 0.8126 |
| 0.3706 | 144.0 | 576 | 0.8124 |
| 0.3706 | 145.0 | 580 | 0.8123 |
| 0.3706 | 146.0 | 584 | 0.8121 |
| 0.3706 | 147.0 | 588 | 0.8120 |
| 0.3706 | 148.0 | 592 | 0.8120 |
| 0.3706 | 149.0 | 596 | 0.8120 |
| 0.3706 | 150.0 | 600 | 0.8120 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
GloryKuo/llama2_medical_qlora | GloryKuo | 2024-07-01T16:46:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:DavidLanz/Llama2-tw-7B-v2.0.1-chat",
"region:us"
] | null | 2024-07-01T16:45:45Z | ---
base_model: DavidLanz/Llama2-tw-7B-v2.0.1-chat
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
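Pending an official snippet, a minimal sketch for loading this adapter with PEFT, assuming it is a standard LoRA/QLoRA adapter on the base model listed above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "DavidLanz/Llama2-tw-7B-v2.0.1-chat"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, "GloryKuo/llama2_medical_qlora")
```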
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
sharathprasaath/Phi-3-mini | sharathprasaath | 2024-07-01T16:48:24Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T16:48:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sharathprasaath/Phi-3-min | sharathprasaath | 2024-07-01T16:48:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T16:48:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kr-manish/distilgpt2-finetuned-rawHrPolicy | kr-manish | 2024-07-01T16:50:14Z | 0 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T16:49:15Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_keras_callback
model-index:
- name: kr-manish/distilgpt2-finetuned-rawHrPolicy
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kr-manish/distilgpt2-finetuned-rawHrPolicy
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0134
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
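A hedged sketch of recreating this optimizer with the `transformers` TF helper; the step counts are placeholders, since the card does not report dataset size or steps:
```python
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=1000,   # assumption: not reported on the card
    num_warmup_steps=0,     # assumption
    adam_epsilon=1e-07,
    weight_decay_rate=0.01,
)
```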
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 4.0134 | 0 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hainc2/llmhainc | hainc2 | 2024-07-02T09:10:13Z | 0 | 0 | null | [
"license:llama3",
"region:us"
] | null | 2024-07-01T16:51:22Z | ---
license: llama3
---
|
Chairles-alex/mistral-two | Chairles-alex | 2024-07-01T16:52:34Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T16:51:23Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: mistralai/Mistral-7B-Instruct-v0.3
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
robinhub/robin_model | robinhub | 2024-07-02T08:21:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T16:54:19Z | ---
base_model: taide/TAIDE-LX-7B-Chat
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** robinhub
- **License:** apache-2.0
- **Finetuned from model :** taide/TAIDE-LX-7B-Chat
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Adzka/test-reward-model | Adzka | 2024-07-01T21:30:05Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:w11wo/indonesian-roberta-base-sentiment-classifier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-01T16:54:29Z | ---
license: mit
base_model: w11wo/indonesian-roberta-base-sentiment-classifier
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-reward-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-reward-model
This model is a fine-tuned version of [w11wo/indonesian-roberta-base-sentiment-classifier](https://huggingface.co/w11wo/indonesian-roberta-base-sentiment-classifier) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2784
- Accuracy: 0.8817
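A hedged usage sketch for scoring text with this classifier (the Indonesian example sentence is illustrative only):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Adzka/test-reward-model")
print(clf("Pelayanannya sangat memuaskan."))
```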
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7179 | 0.67 | 50 | 0.6866 | 0.6237 |
| 0.6866 | 1.33 | 100 | 0.6661 | 0.7742 |
| 0.6546 | 2.0 | 150 | 0.6039 | 0.8280 |
| 0.5421 | 2.67 | 200 | 0.4624 | 0.8172 |
| 0.3965 | 3.33 | 250 | 0.3958 | 0.8280 |
| 0.3244 | 4.0 | 300 | 0.3502 | 0.8495 |
| 0.251 | 4.67 | 350 | 0.4012 | 0.8602 |
| 0.1579 | 5.33 | 400 | 0.3184 | 0.8602 |
| 0.135 | 6.0 | 450 | 0.3141 | 0.8710 |
| 0.1114 | 6.67 | 500 | 0.3474 | 0.8495 |
| 0.0929 | 7.33 | 550 | 0.2931 | 0.8495 |
| 0.0829 | 8.0 | 600 | 0.2757 | 0.8710 |
| 0.0834 | 8.67 | 650 | 0.2889 | 0.8817 |
| 0.057 | 9.33 | 700 | 0.2810 | 0.8925 |
| 0.0503 | 10.0 | 750 | 0.2800 | 0.8817 |
| 0.062 | 10.67 | 800 | 0.2806 | 0.8817 |
| 0.0303 | 11.33 | 850 | 0.2971 | 0.8817 |
| 0.0246 | 12.0 | 900 | 0.2784 | 0.8817 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.15.2
|
starnet/11-star-07-01-02 | starnet | 2024-07-01T16:57:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T16:54:32Z | Entry not found |