modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list of strings) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Arovincy/CustomizedTextGeneration | Arovincy | 2024-06-30T17:57:45Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T17:57:15Z | Entry not found |
LaLaf93/inproceedings_recognizer | LaLaf93 | 2024-06-30T18:07:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T18:00:11Z | Entry not found |
TomEijkelenkamp/renaissance-cogvlm-composition | TomEijkelenkamp | 2024-06-30T18:00:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:00:51Z | Entry not found |
6pu8wtw6/UncensoredPonyXL | 6pu8wtw6 | 2024-06-30T18:00:52Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2024-06-30T18:00:52Z | ---
license: unknown
---
|
apwic/summarization-base-0 | apwic | 2024-06-30T23:41:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-06-30T18:04:33Z | ---
language:
- id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-base-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-base-0
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5082
- Rouge1: 0.3572
- Rouge2: 0.0
- Rougel: 0.3545
- Rougelsum: 0.3557
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
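As a minimal inference sketch (hedged: the Indonesian input text below is a placeholder, and the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("apwic/summarization-base-0")
model = AutoModelForSeq2SeqLM.from_pretrained("apwic/summarization-base-0")

# Placeholder Indonesian article text
inputs = tokenizer("Teks artikel yang akan diringkas ...", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```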
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
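As a rough sketch, these settings map onto a `Seq2SeqTrainingArguments` configuration along the following lines (an assumption about the training script, not the author's exact code; `output_dir` is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above
args = Seq2SeqTrainingArguments(
    output_dir="summarization-base-0",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```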
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.6282 | 1.0 | 3566 | 0.4719 | 0.43 | 0.0 | 0.4255 | 0.4282 | 1.0 |
| 0.4301 | 2.0 | 7132 | 0.4728 | 0.3754 | 0.0 | 0.3711 | 0.3719 | 1.0 |
| 0.3336 | 3.0 | 10698 | 0.4632 | 0.3806 | 0.0 | 0.3777 | 0.3808 | 1.0 |
| 0.2643 | 4.0 | 14264 | 0.4921 | 0.3537 | 0.0 | 0.3512 | 0.3514 | 1.0 |
| 0.2174 | 5.0 | 17830 | 0.5082 | 0.3572 | 0.0 | 0.3545 | 0.3557 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LaLaf93/incollection_recognizer | LaLaf93 | 2024-06-30T18:14:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T18:07:40Z | Entry not found |
CarlosPov/Llama-2-7b-chat-hf-finetune_90_10_MIX | CarlosPov | 2024-06-30T18:09:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-06-30T18:08:23Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: Llama-2-7b-chat-hf-finetune_90_10_MIX
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-chat-hf-finetune_90_10_MIX
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3119
## Model description
More information needed
## Intended uses & limitations
More information needed
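Since this repository stores a PEFT adapter, a minimal loading sketch is shown below (an assumption about usage; it requires access to the gated Llama-2 base model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this repository's adapter (sketch only)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "CarlosPov/Llama-2-7b-chat-hf-finetune_90_10_MIX")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```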
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.6795 | 0.9968 | 316 | 0.7737 |
| 0.2756 | 1.9937 | 632 | 0.8534 |
| 0.166 | 2.9905 | 948 | 0.9507 |
| 0.1135 | 3.9874 | 1264 | 1.0163 |
| 0.086 | 4.9842 | 1580 | 1.0497 |
| 0.0788 | 5.9811 | 1896 | 1.0818 |
| 0.1423 | 6.9779 | 2212 | 1.1176 |
| 0.0778 | 7.9748 | 2528 | 1.1538 |
| 0.0792 | 8.9716 | 2844 | 1.1963 |
| 0.0657 | 9.9685 | 3160 | 1.1900 |
| 0.0639 | 10.9653 | 3476 | 1.2259 |
| 0.0681 | 11.9621 | 3792 | 1.2195 |
| 0.0522 | 12.9590 | 4108 | 1.2163 |
| 0.0492 | 13.9558 | 4424 | 1.2259 |
| 0.048 | 14.9527 | 4740 | 1.2378 |
| 0.0441 | 15.9495 | 5056 | 1.2492 |
| 0.0629 | 16.9464 | 5372 | 1.2564 |
| 0.0622 | 17.9432 | 5688 | 1.2606 |
| 0.0589 | 18.9401 | 6004 | 1.2662 |
| 0.0592 | 19.9369 | 6320 | 1.2712 |
| 0.0586 | 20.9338 | 6636 | 1.2780 |
| 0.0594 | 21.9306 | 6952 | 1.2807 |
| 0.0616 | 22.9274 | 7268 | 1.2874 |
| 0.0554 | 23.9243 | 7584 | 1.2904 |
| 0.0562 | 24.9211 | 7900 | 1.2934 |
| 0.0543 | 25.9180 | 8216 | 1.2961 |
| 0.0553 | 26.9148 | 8532 | 1.2986 |
| 0.0547 | 27.9117 | 8848 | 1.3009 |
| 0.0543 | 28.9085 | 9164 | 1.3025 |
| 0.0535 | 29.9054 | 9480 | 1.3040 |
| 0.0535 | 30.9022 | 9796 | 1.3053 |
| 0.0533 | 31.8991 | 10112 | 1.3068 |
| 0.053 | 32.8959 | 10428 | 1.3078 |
| 0.0528 | 33.8927 | 10744 | 1.3096 |
| 0.0526 | 34.8896 | 11060 | 1.3098 |
| 0.0523 | 35.8864 | 11376 | 1.3100 |
| 0.052 | 36.8833 | 11692 | 1.3102 |
| 0.0516 | 37.8801 | 12008 | 1.3104 |
| 0.0513 | 38.8770 | 12324 | 1.3105 |
| 0.0504 | 39.8738 | 12640 | 1.3107 |
| 0.0508 | 40.8707 | 12956 | 1.3109 |
| 0.0503 | 41.8675 | 13272 | 1.3111 |
| 0.0501 | 42.8644 | 13588 | 1.3114 |
| 0.0502 | 43.8612 | 13904 | 1.3116 |
| 0.05 | 44.8580 | 14220 | 1.3118 |
| 0.0498 | 45.8549 | 14536 | 1.3118 |
| 0.0517 | 46.8517 | 14852 | 1.3118 |
| 0.0496 | 47.8486 | 15168 | 1.3118 |
| 0.0486 | 48.8454 | 15484 | 1.3118 |
| 0.0475 | 49.8423 | 15800 | 1.3119 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
daviddextre/ModelsPonyXL2 | daviddextre | 2024-06-30T19:12:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:08:26Z | Entry not found |
csteinmetz1/afx-rep | csteinmetz1 | 2024-06-30T19:13:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T18:12:55Z | ---
license: apache-2.0
---
|
noobilal/LLaMA3-Steve-Jobs | noobilal | 2024-06-30T18:14:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T18:13:54Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** noobilal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
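A minimal loading sketch with Unsloth (an assumption about usage; `max_seq_length` is a placeholder):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="noobilal/LLaMA3-Steve-Jobs",
    max_seq_length=2048,  # placeholder
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```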
|
LaLaf93/phdthesis_recognizer | LaLaf93 | 2024-06-30T18:21:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T18:14:34Z | Entry not found |
shantanudave/BERTopic_vjuly | shantanudave | 2024-06-30T18:18:02Z | 0 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-06-30T18:18:01Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# BERTopic_vjuly
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("shantanudave/BERTopic_vjuly")
topic_model.get_topic_info()
```
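After loading, `topic_model.get_topic(0)` can also be used to inspect the keyword-score pairs of an individual topic.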
## Topic overview
* Number of topics: 18
* Number of training documents: 8526
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | payment - pay - card - bank - money | 742 | Payment Issues Detection |
| 1 | load - slow - search - article - doesnt | 705 | Slow Search Function |
| 2 | clothes - clothing - size - fashion - large size | 683 | Large Size Quality Clothing |
| 3 | bon - - - - | 668 | bon documents collection |
| 4 | clear - intuitive - clear easy - recommend - selection | 665 | Easy Clear Navigation |
| 5 | - - - - | 649 | Keyword-Driven Document Analysis |
| 6 | shopping - staff - friendly - store - satisfy | 578 | Friendly staff satisfaction |
| 7 | delivery - fast delivery - fast - shipping - ship | 563 | Fast Delivery Quality |
| 8 | cart - shop cart - log - password - add | 548 | Shopping Cart Issues |
| 9 | easy use - easy - use - use easy - quick easy | 531 | Quick & Easy Solutions |
| 10 | awesome - excellent - think - clearly - phenomenal | 462 | Really Phenomenal Clear Thinking |
| 11 | quality - price - quality quality - price quality - comfortable | 454 | Excellent Quality Price |
| 12 | work work - work - work quickly - flawlessly - work flawlessly | 390 | Efficient Flawless Work |
| 13 | super super - super - superb - superb super - super friendly | 349 | Superb Friendly Coat |
| 14 | really simple - ra - solve problem - control - satisfied easy | 145 | User-Friendly Problem Solver |
| 15 | clear clear - clear - fast clear - clear fast - super clear | 144 | Clear and Transparent Working |
| 16 | discover - stuff good - stuff - fact - clearly | 129 | Discovering Interesting Facts |
| 17 | satisfied - satisfaction - totally satisfied - satisfied good - completely satisfied | 121 | Utmost Satisfaction |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
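These settings correspond roughly to the constructor call sketched below (an assumption; the embedding model, dimensionality reduction, and training data handling are not shown):
```python
from bertopic import BERTopic

# Sketch: non-default hyperparameters from the list above
topic_model = BERTopic(
    calculate_probabilities=True,
    min_topic_size=10,
    n_gram_range=(1, 1),
    top_n_words=10,
    verbose=True,
    zeroshot_min_similarity=0.7,
)
```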
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 1.3.5
* Scikit-Learn: 1.4.1.post1
* Sentence-transformers: 2.6.1
* Transformers: 4.41.2
* Numba: 0.59.1
* Plotly: 5.22.0
* Python: 3.10.13
|
adnanshirik/astroclip | adnanshirik | 2024-06-30T21:59:15Z | 0 | 0 | null | [
"arxiv:2310.03024",
"region:us"
] | null | 2024-06-30T18:18:42Z | PyTorch Lightning model checkpoints for all models created in reproduction of [AstroCLIP: A Cross-Modal Foundation Model for Galaxies](https://arxiv.org/abs/2310.03024).
The reproduction is part of an assessed project and is currently private, if you are an assessor and require access to these saved model weights, please request access.
There are 7 model checkpoints, one for each embedding dimensionality in [8, 16, 32, 64, 128, 256, 512].
---
license: mit
---
|
ANDRIOIDEA/ANDRIOIDE | ANDRIOIDEA | 2024-06-30T18:19:04Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:19:04Z | Entry not found |
Nithusikan01/fine-tuned-llama-3-8B-customer-support | Nithusikan01 | 2024-06-30T18:19:20Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:19:20Z | Entry not found |
net31/naschainv148 | net31 | 2024-07-01T09:11:12Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:20:32Z | Entry not found |
habulaj/4532236697 | habulaj | 2024-06-30T18:21:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:21:10Z | Entry not found |
habulaj/12116496255 | habulaj | 2024-06-30T18:24:53Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:24:52Z | Entry not found |
hansa15100/model_3b_pt_r16_epoch10_wiki | hansa15100 | 2024-06-30T22:01:44Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-06-30T18:27:04Z | Entry not found |
bdsaglam/llama-3-8b-jerx-musique-peft-v99rbjcu | bdsaglam | 2024-06-30T18:28:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T18:28:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kpmiller/example-model | kpmiller | 2024-06-30T18:29:40Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-06-30T18:29:40Z | ---
license: mit
---
|
maninderjit829/first-repo | maninderjit829 | 2024-06-30T18:30:57Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:30:57Z | Entry not found |
maninderjit829/test-repo | maninderjit829 | 2024-06-30T18:31:59Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:31:59Z | Entry not found |
abhayesian/LLama3_HarmBench_LAT_9 | abhayesian | 2024-07-01T10:20:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T18:34:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Litzy619/MIS0630T1 | Litzy619 | 2024-07-01T01:14:22Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:36:56Z | Entry not found |
nourheshamshaheen/llava_8epochs | nourheshamshaheen | 2024-06-30T18:38:49Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:38:49Z | Entry not found |
mina-kdr/fr_to_daridja_translate | mina-kdr | 2024-06-30T21:30:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-30T18:41:17Z | Entry not found |
ZeZanZiet/ImageCaptioning | ZeZanZiet | 2024-06-30T18:42:22Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:42:22Z | Entry not found |
habulaj/1633517351 | habulaj | 2024-06-30T18:43:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:43:41Z | Entry not found |
shantanudave/BERTopic_v1_july | shantanudave | 2024-06-30T18:45:19Z | 0 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-06-30T18:45:18Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# BERTopic_v1_july
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("shantanudave/BERTopic_v1_july")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 18
* Number of training documents: 8526
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | payment - pay - card - bank - money | 742 | Payment Issues Detection |
| 1 | load - slow - search - article - doesnt | 705 | Slow Search Function |
| 2 | clothes - clothing - size - fashion - large size | 683 | Large Size Quality Clothing |
| 3 | bon - - - - | 668 | bon documents collection |
| 4 | clear - intuitive - clear easy - recommend - selection | 665 | Easy Clear Navigation |
| 5 | - - - - | 649 | Keyword-Driven Document Analysis |
| 6 | shopping - staff - friendly - store - satisfy | 578 | Friendly staff satisfaction |
| 7 | delivery - fast delivery - fast - shipping - ship | 563 | Fast Delivery Quality |
| 8 | cart - shop cart - log - password - add | 548 | Shopping Cart Issues |
| 9 | easy use - easy - use - use easy - quick easy | 531 | Quick & Easy Solutions |
| 10 | awesome - excellent - think - clearly - phenomenal | 462 | Really Phenomenal Clear Thinking |
| 11 | quality - price - quality quality - price quality - comfortable | 454 | Excellent Quality Price |
| 12 | work work - work - work quickly - flawlessly - work flawlessly | 390 | Efficient Flawless Work |
| 13 | super super - super - superb - superb super - super friendly | 349 | Superb Friendly Coat |
| 14 | really simple - ra - solve problem - control - satisfied easy | 145 | User-Friendly Problem Solver |
| 15 | clear clear - clear - fast clear - clear fast - super clear | 144 | Clear and Transparent Working |
| 16 | discover - stuff good - stuff - fact - clearly | 129 | Discovering Interesting Facts |
| 17 | satisfied - satisfaction - totally satisfied - satisfied good - completely satisfied | 121 | Utmost Satisfaction |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 1.3.5
* Scikit-Learn: 1.4.1.post1
* Sentence-transformers: 2.6.1
* Transformers: 4.41.2
* Numba: 0.59.1
* Plotly: 5.22.0
* Python: 3.10.13
|
shantanudave/BERTopic_v20240630_184948 | shantanudave | 2024-06-30T18:49:50Z | 0 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-06-30T18:49:48Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# BERTopic_v20240630_184948
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("shantanudave/BERTopic_v20240630_184948")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 18
* Number of training documents: 8526
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | payment - pay - card - bank - money | 742 | Payment Issues Detection |
| 1 | load - slow - search - article - doesnt | 705 | Slow Search Function |
| 2 | clothes - clothing - size - fashion - large size | 683 | Large Size Quality Clothing |
| 3 | bon - - - - | 668 | bon documents collection |
| 4 | clear - intuitive - clear easy - recommend - selection | 665 | Easy Clear Navigation |
| 5 | - - - - | 649 | Keyword-Driven Document Analysis |
| 6 | shopping - staff - friendly - store - satisfy | 578 | Friendly staff satisfaction |
| 7 | delivery - fast delivery - fast - shipping - ship | 563 | Fast Delivery Quality |
| 8 | cart - shop cart - log - password - add | 548 | Shopping Cart Issues |
| 9 | easy use - easy - use - use easy - quick easy | 531 | Quick & Easy Solutions |
| 10 | awesome - excellent - think - clearly - phenomenal | 462 | Really Phenomenal Clear Thinking |
| 11 | quality - price - quality quality - price quality - comfortable | 454 | Excellent Quality Price |
| 12 | work work - work - work quickly - flawlessly - work flawlessly | 390 | Efficient Flawless Work |
| 13 | super super - super - superb - superb super - super friendly | 349 | Superb Friendly Coat |
| 14 | really simple - ra - solve problem - control - satisfied easy | 145 | User-Friendly Problem Solver |
| 15 | clear clear - clear - fast clear - clear fast - super clear | 144 | Clear and Transparent Working |
| 16 | discover - stuff good - stuff - fact - clearly | 129 | Discovering Interesting Facts |
| 17 | satisfied - satisfaction - totally satisfied - satisfied good - completely satisfied | 121 | Utmost Satisfaction |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 1.3.5
* Scikit-Learn: 1.4.1.post1
* Sentence-transformers: 2.6.1
* Transformers: 4.41.2
* Numba: 0.59.1
* Plotly: 5.22.0
* Python: 3.10.13
|
Raja526/Bio_BERT_Task-ALL | Raja526 | 2024-06-30T18:50:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-06-30T18:49:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/9147367793 | habulaj | 2024-06-30T18:50:01Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:49:57Z | Entry not found |
darshan-aiml/nycartooncaptioncontest-git-base | darshan-aiml | 2024-07-01T05:57:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:52:39Z | Entry not found |
habulaj/62039211817 | habulaj | 2024-06-30T18:53:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:53:20Z | Entry not found |
Med-tz/category_classifier | Med-tz | 2024-06-30T18:53:52Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:53:52Z | Entry not found |
dbands/mistral-7b-instruct-v0.3-bnb | dbands | 2024-06-30T19:00:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T18:54:49Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
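A minimal text-generation sketch (an assumption about usage; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="dbands/mistral-7b-instruct-v0.3-bnb")
print(generator("Explain gradient descent in one sentence.", max_new_tokens=64)[0]["generated_text"])
```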
|
Nithusikan01/fine-tuned-flan-t5-large-customer-support | Nithusikan01 | 2024-06-30T18:54:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:54:51Z | Entry not found |
Abdelrahman2922/distilbert-base-uncased-finetuned-Disaster_tweets | Abdelrahman2922 | 2024-07-01T19:16:21Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T18:56:35Z | Entry not found |
AgastyaMalik/GarbageNet | AgastyaMalik | 2024-07-01T15:42:16Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:57:29Z | # GarbageNet Model Card
## Model Details
- Model Name: GarbageNet
- Model Architecture: ResNet-50
- Number of Classes: 10
- Dataset: [Garbage Classification V2](https://www.kaggle.com/datasets/sumn2u/garbage-classification-v2)
### Model Description
GarbageNet is a convolutional neural network designed to classify images of garbage into one of 10 predefined categories. The model leverages the ResNet-50 architecture, which is known for its deep residual learning capabilities, enabling it to achieve high accuracy even with relatively fewer training epochs.
- Developed by: Agastya Malik
- Model type: Image Classification
- Finetuned from model: ResNet-50
## Uses
## Direct Use
GarbageNet is intended to be used as a tool for sorting and classifying images of garbage. This can be particularly useful for waste management systems, recycling facilities, and environmental monitoring applications. The model can be directly used through the provided Gradio interface to classify uploaded images of garbage.
## Out-of-Scope Use
GarbageNet is not suitable for:
- Classifying non-garbage items.
- High-stakes applications where misclassification can lead to significant consequences.
- Situations requiring real-time processing on devices with limited computational power.
## Bias, Risks, and Limitations
GarbageNet, like all machine learning models, has inherent limitations and potential biases:
- Bias: The model's performance may vary based on the diversity of the training dataset. If the dataset lacks sufficient examples of certain categories or specific types of images, the model may not perform well on those.
- Risks: Misclassification can lead to incorrect sorting of waste, which may affect recycling processes and waste management efficiency.
- Limitations: The model may not perform well in poor lighting conditions, with low-resolution images, or with objects that belong to multiple categories.
## Recommendations
- Use high-quality, well-lit images for classification.
- Continuously monitor and validate the model's performance in real-world scenarios.
- Be cautious when deploying the model in critical applications, and consider augmenting the dataset to improve performance on underrepresented categories.
## Training Details
### Training Dataset
The training dataset is sourced from Kaggle and contains images classified into the following categories:
- Cardboard
- Glass
- Metal
- Paper
- Plastic
- Trash
- Battery
- Clothes
- Shoes
- Electronics
### Preprocessing
Before feeding the images into the model, the following preprocessing steps were applied (a torchvision sketch follows):
- Resizing images to 224x224 pixels.
- Normalizing pixel values to the range [0, 1].
- Applying data augmentation techniques such as rotation, flipping, and color jitter to increase the diversity of the training data.
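A minimal torchvision sketch of these steps (the exact augmentation magnitudes are assumptions, not taken from the training script):
```python
from torchvision import transforms

# Sketch of the described preprocessing; rotation/jitter magnitudes are assumed values
train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),  # converts to tensor and scales pixels to [0, 1]
])
```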
### Training Configuration
- Optimizer: Adam
  - Learning Rate: 0.001
  - Beta1: 0.9
  - Beta2: 0.999
- Loss Function: CrossEntropyLoss
- Epochs: 5
- Batch Size: 32
- Learning Rate Scheduler: StepLR
  - Step Size: 2
  - Gamma: 0.1
A PyTorch sketch of this setup is shown below.
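A minimal PyTorch sketch of this configuration (an assumption; the actual training loop is not shown):
```python
import torch
from torchvision.models import resnet50

model = resnet50(num_classes=10)  # 10 garbage categories
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)
```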
## Evaluation
The model was evaluated using a separate validation set, achieving the following performance metrics:
- Accuracy: 93%
- Precision: 92%
- Recall: 91%
- F1 Score: 91%
Confusion matrices and ROC curves were also generated to provide deeper insights into the model's performance across different classes.
|
mahamadahmed/ser | mahamadahmed | 2024-06-30T18:59:45Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T18:59:44Z | Entry not found |
Jaygeo067/llama-2-Trgoejay | Jaygeo067 | 2024-06-30T19:07:49Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T19:02:44Z | Entry not found |
maxseats/tmp | maxseats | 2024-07-03T01:04:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-06-30T19:05:43Z | Entry not found |
uma-wandb/my_video_model | uma-wandb | 2024-07-01T04:07:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-06-30T19:06:26Z | Entry not found |
habulaj/10820283115 | habulaj | 2024-06-30T19:08:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:08:19Z | Entry not found |
ZeZanZiet/blip_image_captioning_v1 | ZeZanZiet | 2024-07-01T04:27:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T19:09:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maninderjit829/test2 | maninderjit829 | 2024-06-30T19:12:58Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:12:58Z | Entry not found |
maninderjit829/xoxo | maninderjit829 | 2024-06-30T19:13:53Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:13:53Z | Entry not found |
chihli/llama-3-8b-chat-doctor-1 | chihli | 2024-07-01T10:15:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T19:18:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/233173204625 | habulaj | 2024-06-30T19:18:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:18:37Z | Entry not found |
aadd77551/AI-test | aadd77551 | 2024-06-30T19:19:49Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:19:49Z | Entry not found |
habulaj/131864108578 | habulaj | 2024-06-30T19:21:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:21:10Z | Entry not found |
senagoksu/opus-mt-en-ro-finetuned-en-to-ro | senagoksu | 2024-06-30T19:22:18Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:22:18Z | Entry not found |
Maarten1953/pegasus-samsum | Maarten1953 | 2024-06-30T20:29:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-30T19:22:36Z | ---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
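A minimal inference sketch (the dialogue is an illustrative placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Maarten1953/pegasus-samsum")
dialogue = "Anna: Are we still on for lunch?\nTom: Yes, 12:30 at the usual place."
print(summarizer(dialogue)[0]["summary_text"])
```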
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6719 | 0.5430 | 500 | 1.4844 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
JuliusFx/merged_model_opt_exp | JuliusFx | 2024-06-30T22:29:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T19:23:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aklein4/Qwen2-0.5B-tldr-dpo-1.0 | aklein4 | 2024-06-30T19:26:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T19:25:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thesanjeetc/qlora1 | thesanjeetc | 2024-06-30T19:27:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T19:27:01Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** thesanjeetc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
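A minimal loading sketch, assuming the repository follows the standard Unsloth layout (adapter or merged weights compatible with `FastLanguageModel`); this mirrors Unsloth's documented pattern rather than anything stated in this card:

```python
# Hedged sketch: loading the fine-tuned model with Unsloth.
# max_seq_length is an assumption; adjust it to your use case.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="thesanjeetc/qlora1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster inference mode
```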
|
chirazman/my_awesome_qa_model | chirazman | 2024-06-30T19:29:00Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:29:00Z | Entry not found |
vidula123/Llama-2-7b-chat-finetune-GGUF | vidula123 | 2024-06-30T21:50:57Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"text-generation",
"en",
"dataset:Vidulaae/sales_target",
"dataset:Vidulaae/sales-analysis",
"dataset:vidula123/Sales_Queries",
"dataset:Vidulaae/demo-data",
"dataset:Vidulaae/sales_analysis1",
"license:llama2",
"region:us"
] | text-generation | 2024-06-30T19:31:14Z | ---
license: llama2
datasets:
- Vidulaae/sales_target
- Vidulaae/sales-analysis
- vidula123/Sales_Queries
- Vidulaae/demo-data
- Vidulaae/sales_analysis1
language:
- en
pipeline_tag: text-generation
library_name: adapter-transformers
--- |
thisiskeithkwan/stanford-deidentifier-base-onnx | thisiskeithkwan | 2024-06-30T19:38:57Z | 0 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T19:37:24Z | Entry not found |
kartikay101/whisper-small-hi | kartikay101 | 2024-07-01T07:34:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:Wtimit_vowel_consonent_mask_spec_aug",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-06-30T19:37:56Z | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- Wtimit_vowel_consonent_mask_spec_aug
metrics:
- wer
model-index:
- name: Whisper Small Testing
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Wtimit_vowel_consonent_mask_spec_aug
type: Wtimit_vowel_consonent_mask_spec_aug
metrics:
- name: Wer
type: wer
value: 19.044740024183795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Testing
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Wtimit_vowel_consonent_mask_spec_aug dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2954
- Wer: 19.0447
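A minimal transcription sketch using the `transformers` pipeline; the audio filename is an assumption:

```python
# Hedged sketch: transcribing an audio file with this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kartikay101/whisper-small-hi")
result = asr("sample.wav")  # path to your own audio file (assumption)
print(result["text"])
```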
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.9758 | 0.0213 | 100 | 0.7484 | 18.7534 |
| 0.2983 | 0.0426 | 200 | 0.3043 | 17.6871 |
| 0.2397 | 0.0638 | 300 | 0.2875 | 17.1375 |
| 0.2283 | 0.0851 | 400 | 0.2787 | 17.1320 |
| 0.1904 | 0.1064 | 500 | 0.2780 | 16.7583 |
| 0.1799 | 0.1277 | 600 | 0.2748 | 16.7033 |
| 0.1782 | 0.1489 | 700 | 0.2748 | 16.7308 |
| 0.1522 | 0.1702 | 800 | 0.2726 | 16.6154 |
| 0.1326 | 0.1915 | 900 | 0.2687 | 16.4450 |
| 0.138 | 0.2128 | 1000 | 0.2702 | 16.3351 |
| 0.1317 | 0.2340 | 1100 | 0.2715 | 16.6978 |
| 0.1312 | 0.2553 | 1200 | 0.2712 | 16.7748 |
| 0.1222 | 0.2766 | 1300 | 0.2718 | 16.6209 |
| 0.1181 | 0.2979 | 1400 | 0.2736 | 17.1870 |
| 0.0975 | 0.3191 | 1500 | 0.2710 | 16.8352 |
| 0.0795 | 0.3404 | 1600 | 0.2718 | 16.8352 |
| 0.0791 | 0.3617 | 1700 | 0.2742 | 16.8847 |
| 0.0822 | 0.3830 | 1800 | 0.2744 | 16.6758 |
| 0.0734 | 0.4043 | 1900 | 0.2757 | 17.1155 |
| 0.0896 | 0.4255 | 2000 | 0.2771 | 17.2749 |
| 0.0578 | 0.4468 | 2100 | 0.2769 | 17.3299 |
| 0.0727 | 0.4681 | 2200 | 0.2800 | 17.6652 |
| 0.0691 | 0.4894 | 2300 | 0.2793 | 17.4893 |
| 0.0656 | 0.5106 | 2400 | 0.2787 | 17.3574 |
| 0.0726 | 0.5319 | 2500 | 0.2793 | 17.5662 |
| 0.0494 | 0.5532 | 2600 | 0.2807 | 17.6487 |
| 0.0635 | 0.5745 | 2700 | 0.2800 | 17.7091 |
| 0.0503 | 0.5957 | 2800 | 0.2837 | 17.8026 |
| 0.0688 | 0.6170 | 2900 | 0.2820 | 17.7531 |
| 0.058 | 0.6383 | 3000 | 0.2858 | 18.1269 |
| 0.051 | 0.6596 | 3100 | 0.2871 | 18.1159 |
| 0.0535 | 0.6809 | 3200 | 0.2870 | 18.4951 |
| 0.0665 | 0.7021 | 3300 | 0.2868 | 18.5776 |
| 0.0497 | 0.7234 | 3400 | 0.2891 | 18.6105 |
| 0.0558 | 0.7447 | 3500 | 0.2891 | 18.5446 |
| 0.0384 | 0.7660 | 3600 | 0.2891 | 18.6820 |
| 0.0413 | 0.7872 | 3700 | 0.2908 | 18.7369 |
| 0.0562 | 0.8085 | 3800 | 0.2916 | 18.6655 |
| 0.0523 | 0.8298 | 3900 | 0.2920 | 18.6600 |
| 0.043 | 0.8511 | 4000 | 0.2928 | 18.7260 |
| 0.0463 | 0.8723 | 4100 | 0.2926 | 18.6765 |
| 0.0517 | 0.8936 | 4200 | 0.2942 | 18.7809 |
| 0.0408 | 0.9149 | 4300 | 0.2950 | 18.7644 |
| 0.0362 | 0.9362 | 4400 | 0.2954 | 18.8799 |
| 0.047 | 0.9574 | 4500 | 0.2954 | 18.9623 |
| 0.0347 | 0.9787 | 4600 | 0.2954 | 19.0118 |
| 0.0404 | 1.0 | 4700 | 0.2956 | 19.0392 |
| 0.0559 | 1.0213 | 4800 | 0.2956 | 19.0063 |
| 0.0462 | 1.0426 | 4900 | 0.2956 | 19.1162 |
| 0.0385 | 1.0638 | 5000 | 0.2954 | 19.0447 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
fbaldassarri/modello-italia-9B-GGUF | fbaldassarri | 2024-07-02T19:26:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"gpt_neox",
"text-generation",
"pytorch",
"conversational",
"it",
"base_model:sapienzanlp/modello-italia-9b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T19:38:25Z | ---
language:
- it
license: mit
tags:
- pytorch
model_name: Modello Italia 9B
base_model: sapienzanlp/modello-italia-9b
inference: false
model_creator: iGeniusAI
model_type: gpt-neox
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
# Model Card for Modello Italia 9B GGUFs
This is an UNOFFICIAL repository of GGUF-format model files converted/quantized from the OFFICIAL model checkpoint of *"Modello Italia 9B"*, a Large Language Model (LLM) developed by [iGenius](https://it.igenius.ai/) in collaboration with [CINECA](https://www.cineca.it/).
* More information about Modello Italia: [click here](https://it.igenius.ai/language-models).
## 🚨 Disclaimers
* This is an UNOFFICIAL quantization of the OFFICIAL model checkpoint released by iGenius.
* This model also builds on the conversion to HF Transformers made by [Sapienza NLP, Sapienza University of Rome](https://huggingface.co/sapienzanlp).
* The original model was developed using LitGPT; therefore, the weights must be converted before they can be used with Hugging Face Transformers.
## 🚨 Terms and Conditions
* **Note:** By using this model, you accept the iGenius' [**terms and conditions**](https://secure.igenius.ai/legal/italia_terms_and_conditions.pdf).
### 🚨 About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
## 🚨 Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
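As a minimal sketch (assuming `llama-cpp-python` is installed and a quantized file from this repository has been downloaded — the filename below is an assumption, so substitute the quantization you chose):

```python
# Hedged sketch: running a Modello Italia 9B GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="modello-italia-9b.Q4_K_M.gguf", n_ctx=2048)
output = llm("Qual è la capitale d'Italia?", max_tokens=64)
print(output["choices"][0]["text"])
```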
## 🚨 Reproducibility
This model has been converted/quantized using Intel [neural-speed](https://github.com/intel/neural-speed/).
## 🚨 Biases and Risks
From the terms and conditions of iGenius for Modello Italia:
> Modello Italia is designed to be used by everyone and to adapt to a wide range of use cases. It was built with the goal of being accessible to people from diverse backgrounds, experiences, and perspectives. Modello Italia addresses users and their needs without inserting superfluous judgments or prescriptions, while recognizing that even content that is potentially problematic in certain contexts can serve valid purposes in others.
> Respect for the dignity and autonomy of all users, especially in terms of freedom of thought and expression, is a fundamental pillar of its design. However, as a new technology, Modello Italia carries risks related to its use. The tests conducted so far have been performed in Italian and could not cover every possible situation.
> Therefore, as with all LLMs, Modello Italia's outputs cannot be predicted in advance, and in some cases the model may generate inaccurate, biased, or otherwise questionable responses. Before using Modello Italia in any context, developers are strongly encouraged to run safety and adaptation tests specific to their applications.
We are aware of the biases and potential problematic/toxic content that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data.
For more information about this issue, please refer to our survey paper:
* [Biases in Large Language Models: Origins, Inventory, and Discussion](https://dl.acm.org/doi/full/10.1145/3597307)
## Model architecture
* The model architecture is **based on GPT-NeoX**.
|
silveroxides/Vision_8B_Uncensored_4bit | silveroxides | 2024-06-30T19:38:53Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:38:53Z | Entry not found |
aklein4/Qwen2-0.5B-tldr-dro-binary-1.0 | aklein4 | 2024-06-30T19:40:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T19:39:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Anujgr8/Whisper-Anuj-Medum-Medium-lalo | Anujgr8 | 2024-07-01T06:18:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-06-30T19:40:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ewjfwejfoiwe/Unet_GAN_Self-driving-car-vision-segmentation | ewjfwejfoiwe | 2024-06-30T19:45:49Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-06-30T19:42:30Z | ---
license: mit
---
|
talhaturab/my-first-model | talhaturab | 2024-06-30T19:46:37Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"animal",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-06-30T19:44:40Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a photo with the face of talhamax cat and the body of an elephant
---
# DreamBooth model for a custom fine-tuning concept, trained by talha on the max_cat dataset.
This is a Stable Diffusion model fine-tuned on the max-pics concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of talhamax cat**.
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a demo model.
## Usage
```python
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint and generate with the instance prompt;
# StableDiffusionPipeline requires a text prompt when called.
pipeline = StableDiffusionPipeline.from_pretrained('talhaturab/my-first-model')
image = pipeline('a photo of talhamax cat').images[0]
image
```
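If a CUDA GPU is available (an assumption about the runtime environment), moving the pipeline to the device speeds up generation considerably:

```python
# Optional: run the pipeline on GPU; assumes a CUDA device is present.
pipeline = pipeline.to("cuda")
image = pipeline('a photo of talhamax cat').images[0]
```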
|
2052man/vira-reservation | 2052man | 2024-06-30T19:47:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T19:47:30Z | ---
license: apache-2.0
---
|
LowFace/newtest | LowFace | 2024-06-30T19:53:01Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T19:53:01Z | Entry not found |
erayyapagci/multilingual-e5-onnx-vespa | erayyapagci | 2024-06-30T20:06:31Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2024-06-30T19:54:45Z | Entry not found |
senagoksu/t5-small-finetuned-xsum | senagoksu | 2024-07-01T11:36:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-06-30T19:58:02Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 27.9257
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5074
- Rouge1: 27.9257
- Rouge2: 7.4618
- Rougel: 21.9338
- Rougelsum: 21.9405
- Gen Len: 18.8176
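A minimal inference sketch via the `transformers` summarization pipeline; the input text is an illustrative assumption:

```python
# Hedged sketch: summarizing text with this fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="senagoksu/t5-small-finetuned-xsum")
article = "Your long input document goes here ..."  # placeholder text (assumption)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```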
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.817 | 0.0784 | 500 | 2.5683 | 26.6596 | 6.6324 | 20.7701 | 20.7761 | 18.8057 |
| 2.8029 | 0.1568 | 1000 | 2.5435 | 27.1558 | 6.9694 | 21.2178 | 21.2216 | 18.7999 |
| 2.7797 | 0.2352 | 1500 | 2.5270 | 27.5528 | 7.2608 | 21.621 | 21.6233 | 18.7982 |
| 2.7651 | 0.3137 | 2000 | 2.5165 | 27.6104 | 7.2896 | 21.6928 | 21.7012 | 18.8133 |
| 2.7514 | 0.3921 | 2500 | 2.5112 | 27.8452 | 7.3791 | 21.8632 | 21.8659 | 18.8118 |
| 2.7463 | 0.4705 | 3000 | 2.5074 | 27.9257 | 7.4618 | 21.9338 | 21.9405 | 18.8176 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Tristan7234/Emma | Tristan7234 | 2024-06-30T19:59:04Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-06-30T19:59:04Z | ---
license: mit
---
|
Samiyar/Teste | Samiyar | 2024-06-30T19:59:08Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-06-30T19:59:08Z | ---
license: openrail
---
|
adamkarvonen/othello-saes | adamkarvonen | 2024-06-30T20:10:00Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:02:15Z | Entry not found |
ANDREBARRETOLOPES/Andre | ANDREBARRETOLOPES | 2024-06-30T20:03:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T20:03:46Z | ---
license: apache-2.0
---
|
ralphkalweit/Reinforce-PixelCopter | ralphkalweit | 2024-06-30T20:28:11Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-06-30T20:04:26Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.30 +/- 9.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Litzy619/MIS0630T2 | Litzy619 | 2024-07-01T02:36:00Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:05:43Z | Entry not found |
random2344/vector2 | random2344 | 2024-06-30T20:09:07Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:08:48Z | Entry not found |
sindhujag26/distilbert-base-uncased-finetuned-ner | sindhujag26 | 2024-07-01T12:59:17Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-06-30T20:09:58Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0020
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
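A minimal usage sketch via the `transformers` token-classification pipeline; the example sentence is an assumption:

```python
# Hedged sketch: running NER with this fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sindhujag26/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into entities
)
print(ner("Hugging Face is based in New York City."))
```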
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 50 | 0.0208 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 100 | 0.0027 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 150 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2+cpu
- Datasets 2.19.2
- Tokenizers 0.19.1
|
pedroharaujo/emma_lora | pedroharaujo | 2024-06-30T23:18:49Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:10:05Z | Entry not found |
lit9003code/melotts300 | lit9003code | 2024-06-30T20:14:11Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:13:50Z | Entry not found |
lit9003code/melotts301 | lit9003code | 2024-06-30T20:15:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:15:25Z | Entry not found |
lit9003code/melotts302 | lit9003code | 2024-06-30T20:18:16Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:17:01Z | Entry not found |
lit9003code/melotts303 | lit9003code | 2024-06-30T20:20:01Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:19:38Z | Entry not found |
lit9003code/melotts304 | lit9003code | 2024-06-30T20:21:34Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:21:14Z | Entry not found |
AnotherNN/Losk | AnotherNN | 2024-06-30T20:23:12Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:21:25Z | Entry not found |
Moriacrafter/Qwen1.5-0.5B-8bit_DepressionDetection | Moriacrafter | 2024-06-30T20:22:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T20:22:11Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lit9003code/melotts305 | lit9003code | 2024-06-30T20:23:11Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:22:49Z | Entry not found |
FartLabs/FART_SMILES_tokenized_PubChem_shard00_160k_augmented | FartLabs | 2024-06-30T20:23:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T20:23:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lit9003code/melotts306 | lit9003code | 2024-06-30T20:25:38Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:24:29Z | Entry not found |
iceman2434/xlm-roberta-base-ft-udpos213-top2langrandom | iceman2434 | 2024-06-30T20:32:46Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:26:33Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: XLM-RoBERTa (base-sized model)
- Randomized training order of languages
- Training Data:
- Combined Afrikaans & Norwegian corpora (Top 2 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on the Universal Dependencies Tagalog Ugnayan test set (75.58% accuracy); a usage sketch follows
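A minimal zero-shot tagging sketch; the Tagalog example sentence is an assumption:

```python
# Hedged sketch: zero-shot POS tagging of Tagalog text with this checkpoint.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="iceman2434/xlm-roberta-base-ft-udpos213-top2langrandom",
)
for token in tagger("Kumain ako ng mansanas."):
    print(token["word"], token["entity"])
```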
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB |
lit9003code/melotts307 | lit9003code | 2024-06-30T20:27:21Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:26:58Z | Entry not found |
aadd77551/test | aadd77551 | 2024-06-30T20:27:40Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:27:40Z | Entry not found |
iceman2434/xlm-roberta-base-ft-udpos213-top3langrandom | iceman2434 | 2024-06-30T20:33:12Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:28:23Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: XLM-RoBERTa (base-sized model)
- Randomized training order of languages
- Training Data:
- Combined Afrikaans, Norwegian, & Vietnamese corpora (Top 3 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on the Universal Dependencies Tagalog Ugnayan test set (75.29% accuracy)
## POS Tags
- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB |
lit9003code/melotts308 | lit9003code | 2024-06-30T20:28:58Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:28:38Z | Entry not found |
asafi/Meta-Llama-3-medical-8B-merged | asafi | 2024-06-30T20:34:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T20:29:22Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
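A minimal sketch, assuming the merged checkpoint loads with the standard `transformers` causal-LM classes under the repo id of this entry; the prompt, precision, and generation settings are illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asafi/Meta-Llama-3-medical-8B-merged"  # repo id from this entry
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; not stated in the card
    device_map="auto",
)

prompt = "What are common symptoms of iron-deficiency anemia?"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```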
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iceman2434/xlm-roberta-base-ft-udpos213-top4langrandom | iceman2434 | 2024-06-30T20:33:33Z | 0 | 0 | null | [
"token-classification",
"tl",
"dataset:universal_dependencies",
"region:us"
] | token-classification | 2024-06-30T20:30:11Z | ---
datasets:
- universal_dependencies
language:
- tl
metrics:
- f1
pipeline_tag: token-classification
---
## Model Specification
- Model: XLM-RoBERTa (base-sized model)
- Randomized training order of languages
- Training Data:
- Combined Afrikaans, Norwegian, Vietnamese, & Hebrew corpora (Top 4 Languages)
- Training Details:
- Base configurations with learning rate 5e-5
## Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)
- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 77.55% accuracy)
## POS Tags
- ADJ · ADP · ADV · CCONJ · DET · INTJ · NOUN · NUM · PART · PRON · PROPN · PUNCT · SCONJ · VERB |
lit9003code/melotts309 | lit9003code | 2024-06-30T20:31:28Z | 0 | 0 | null | [
"region:us"
] | null | 2024-06-30T20:30:15Z | Entry not found |
DimensionSTP/Llama-3-KoEn-8B-Instruct-preview-scientificQA | DimensionSTP | 2024-06-30T20:39:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-30T20:30:25Z | ---
language:
- en
- ko
license: cc-by-nc-sa-4.0
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
---
## Model Details
**This model is fine-tuned from beomi/Llama-3-KoEn-8B-Instruct-preview**
**Fine-tuning dataset: Scientific QA dataset**
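A minimal generation sketch (not an official snippet from this card), assuming the checkpoint keeps the Llama-3 chat template of its base model; the question and generation settings are illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DimensionSTP/Llama-3-KoEn-8B-Instruct-preview-scientificQA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed; not stated in the card
    device_map="auto",
)

# Illustrative scientific question: "How does photosynthesis occur?"
messages = [{"role": "user", "content": "광합성은 어떻게 일어나나요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```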
|
tarsssss/finetuned-kde4-pt-to-ca-2 | tarsssss | 2024-07-01T04:16:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:open_subtitles",
"base_model:Helsinki-NLP/opus-mt-pt-ca",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-30T20:31:02Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-pt-ca
tags:
- generated_from_trainer
datasets:
- open_subtitles
metrics:
- bleu
model-index:
- name: finetuned-kde4-pt-to-ca-2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: open_subtitles
type: open_subtitles
config: ca-pt
split: train
args: ca-pt
metrics:
- name: Bleu
type: bleu
value: 35.718810961905895
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-kde4-pt-to-ca-2
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-pt-ca](https://huggingface.co/Helsinki-NLP/opus-mt-pt-ca) on the open_subtitles dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0943
- Bleu: 35.7188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
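Restated below as a hedged `Seq2SeqTrainingArguments` sketch; `output_dir` is an assumption, and the Adam betas and epsilon above are the library defaults, so they need no explicit arguments.
```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the reported configuration; anything not listed
# in the card is an illustrative assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="finetuned-kde4-pt-to-ca-2",  # assumed from the model name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # mixed_precision_training: Native AMP
)
```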
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|