modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 18:27:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 18:23:41) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---
fireworks-ai/firefunction-v1 | fireworks-ai | 2024-03-06T21:24:22Z | 31 | 126 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"function-calling",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-16T06:58:00Z | ---
license: apache-2.0
tags:
- function-calling
---
# Fireworks Function Calling (FireFunction) Model V1
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/12mfdeAJzW1NdKrN_J--L.png" alt="firefunction" width="400"/>
FireFunction is a state-of-the-art function-calling model with a commercially viable license. Key info and highlights:
* The model is also hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v1) platform, offered for free during a limited beta period.
* Near GPT-4-level quality for real-world use cases of structured information generation and routing decision-making.
* Blazing fast speed: inference is roughly 4x faster than GPT-4 when using FireFunction hosted on the Fireworks platform.
* Support for the "any" value in `tool_choice`: the only model we're aware of that supports an option to force the model to always choose a function, which is particularly helpful for routing use cases.
The model is also API-compatible with [OpenAI function calling](https://platform.openai.com/docs/guides/function-calling):
```sh
OPENAI_API_BASE=https://api.fireworks.ai/inference/v1
OPENAI_API_KEY=<YOUR_FIREWORKS_API_KEY>
MODEL=accounts/fireworks/models/firefunction-v1
```
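Because the endpoint is OpenAI-compatible, the standard `openai` Python client should work against it. A minimal sketch, assuming the current `openai` SDK; the tool spec and prompt below are illustrative rather than from the original card, and `tool_choice="any"` is the FireFunction extension described above:
```python
import openai

client = openai.OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<YOUR_FIREWORKS_API_KEY>",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v1",
    messages=[{"role": "user", "content": "What is the current stock price of AAPL?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the current stock price",
            "parameters": {
                "type": "object",
                "properties": {"symbol": {"type": "string"}},
                "required": ["symbol"],
            },
        },
    }],
    tool_choice="any",  # FireFunction extension: always pick some function
)
print(response.choices[0].message.tool_calls)
```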
## Resources
* [FireFunction-v1 Blog Post](https://fireworks.ai/blog/firefunction-v1-gpt-4-level-function-calling)
* [Fireworks discord with function calling channel](https://discord.gg/mMqQxvFD9A)
* [Documentation](https://readme.fireworks.ai/docs/function-calling)
* [UI Demo app](https://functional-chat.vercel.app/)
* [Try in Fireworks prompt playground UI](https://fireworks.ai/models/fireworks/firefunction-v1)
* [Running Locally with Ollama](https://ollama.com/joefamous/firefunction-v1/tags)
# Intended Use and Limitations
### Primary Use
Although the model was trained on a variety of tasks, it performs best on:
* single-turn request routing to a function picked from a pool of up to 20 function specs.
* structured information extraction.
See [blog post](https://fireworks.ai/blog) for more info on FireFunction.
### Out-of-Scope Use
The model was not optimized for the following use cases:
* general multi-turn chat
* parallel and nested function calls in a single response (these can be broken into multiple messages)
## Example Usage
See [documentation](https://readme.fireworks.ai/docs/function-calling) for more detail.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import json
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("fireworks-ai/firefunction-v1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("fireworks-ai/firefunction-v1")
function_spec = [
{
"name": "get_stock_price",
"description": "Get the current stock price",
"parameters": {
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "The stock symbol, e.g. AAPL, GOOG"
}
},
"required": [
"symbol"
]
}
},
{
"name": "check_word_anagram",
"description": "Check if two words are anagrams of each other",
"parameters": {
"type": "object",
"properties": {
"word1": {
"type": "string",
"description": "The first word"
},
"word2": {
"type": "string",
"description": "The second word"
}
},
"required": [
"word1",
"word2"
]
}
}
]
functions = json.dumps(function_spec, indent=4)
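# FireFunction reads the JSON-serialized specs from a dedicated 'functions' role message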
messages = [
{'role': 'functions', 'content': functions},
{'role': 'system', 'content': 'You are a helpful assistant with access to functions. Use them if required.'},
{'role': 'user', 'content': 'Hi, can you tell me the current stock price of AAPL?'}
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
generated_ids = model.generate(model_inputs, max_new_tokens=128)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Demo App
Check out our easy-to-extend [demo chat app](https://github.com/fw-ai/forge/tree/main/apps/functional_chat) with function-calling capabilities, built on the FireFunction model.
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/A2rFnYxM9xGCc_LiZeXe7.mp4"></video>
|
furrutiav/bert_qa_extractor_2022_ulra_by_kmeans_Q_nllf_ef_plus_nllf_v0_best_by_mixtral_v2_signal_it_118 | furrutiav | 2024-03-06T21:22:33Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T21:21:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
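In the absence of author-provided code, here is a minimal feature-extraction sketch based on the repo's `transformers`/`bert`/`feature-extraction` tags; the usage pattern is assumed, not confirmed by the authors:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "furrutiav/bert_qa_extractor_2022_ulra_by_kmeans_Q_nllf_ef_plus_nllf_v0_best_by_mixtral_v2_signal_it_118"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Mean-pool the last hidden state into one vector per sentence
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```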
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sarak7/H15_36_769_v1 | sarak7 | 2024-03-06T21:19:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T21:17:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mk7756/ru_egov_mistral-7b-instruct_prompt_2 | mk7756 | 2024-03-06T21:12:07Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T21:09:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JCrasby/Ashy | JCrasby | 2024-03-06T21:11:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-06T21:11:13Z | ---
license: creativeml-openrail-m
---
|
sarak7/H14_36_769_v1 | sarak7 | 2024-03-06T21:03:27Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T21:01:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phobos-deimos/distilbert-base-uncased-finetuned-emotion | phobos-deimos | 2024-03-06T21:01:34Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T20:28:37Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246482204954132
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
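Pending author details, a minimal inference sketch assuming standard `transformers` pipeline usage (label names depend on the saved config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="phobos-deimos/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))
# e.g. [{'label': 'joy', 'score': 0.98}]
```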
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7978 | 1.0 | 250 | 0.2972 | 0.912 | 0.9110 |
| 0.2399 | 2.0 | 500 | 0.2123 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OwOOwO/exp4 | OwOOwO | 2024-03-06T20:56:05Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T20:53:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Maqqq/OpenHermes-2.5-Mistral-7B-4 | Maqqq | 2024-03-06T20:55:28Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T20:22:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joseagmz/TinyLlama-preprocess-medtext-epochs-1-lr-0002 | joseagmz | 2024-03-06T20:54:51Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T20:52:41Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- generated_from_trainer
model-index:
- name: TinyLlama-preprocess-medtext-epochs-1-lr-0002
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
adapter: null
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
bf16: auto
dataset_prepared_path: last_run_prepared
datasets:
- path: utrgvseniorproject/medtext
type: completion
debug: null
deepspeed: null
early_stopping_patience: null
eval_sample_packing: false
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_fuse_mlp: true
flash_attn_fuse_qkv: false
flash_attn_rms_norm: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: false
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: null
lora_dropout: null
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: null
lora_target_linear: null
lr_scheduler: cosine
micro_batch_size: 1
model_type: LlamaForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: ./TinyLlama-preprocess-medtext-epochs-1-lr-0002
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
saves_per_epoch: 1
sequence_len: 2048
special_tokens: null
strict: false
tf32: false
tokenizer_type: LlamaTokenizer
train_on_inputs: false
val_set_size: 0.05
wandb_entity: utrgvmedai
wandb_log_model: null
wandb_name: tinyLama_colab_test_4
wandb_project: TinyLlama-preprocess-medtext-epochs-1-lr-0002
wandb_watch: null
warmup_steps: 100
weight_decay: 0.1
xformers_attention: null
```
</details><br>
# TinyLlama-preprocess-medtext-epochs-1-lr-0002
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the utrgvseniorproject/medtext dataset (per the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 2.6325
## Model description
More information needed
## Intended uses & limitations
More information needed
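Pending author details, a minimal generation sketch assuming standard `transformers` usage (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="joseagmz/TinyLlama-preprocess-medtext-epochs-1-lr-0002",
)
out = generator("A 45-year-old patient presents with chest pain.", max_new_tokens=64)
print(out[0]["generated_text"])
```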
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7582 | 0.0 | 1 | 2.1282 |
| 2.6905 | 0.25 | 155 | 4.0796 |
| 2.9887 | 0.5 | 310 | 2.8330 |
| 2.6398 | 0.75 | 465 | 2.7038 |
| 1.7458 | 1.0 | 620 | 2.6325 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
guilhermebastos96/speecht5_finetuned_female_globo_add_token | guilhermebastos96 | 2024-03-06T20:52:18Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-03-06T04:47:58Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_female_globo_add_token
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_female_globo_add_token
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3416
## Model description
More information needed
## Intended uses & limitations
More information needed
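Assuming this checkpoint follows the standard [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) usage, a minimal text-to-speech sketch; SpeechT5 requires a speaker embedding, and the CMU ARCTIC x-vectors below are a common stand-in (the example text is illustrative):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "guilhermebastos96/speecht5_finetuned_female_globo_add_token"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions generation on a speaker x-vector
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Bom dia, tudo bem?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```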
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4216 | 7.14 | 1000 | 0.3775 |
| 0.4003 | 14.27 | 2000 | 0.3635 |
| 0.3868 | 21.41 | 3000 | 0.3513 |
| 0.3863 | 28.55 | 4000 | 0.3475 |
| 0.3737 | 35.68 | 5000 | 0.3444 |
| 0.3753 | 42.82 | 6000 | 0.3439 |
| 0.3736 | 49.96 | 7000 | 0.3421 |
| 0.3719 | 57.09 | 8000 | 0.3419 |
| 0.3686 | 64.23 | 9000 | 0.3419 |
| 0.3694 | 71.36 | 10000 | 0.3416 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
allstax/Bodhi-unstable-v0.0.1-GGUF | allstax | 2024-03-06T20:49:23Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-05T04:19:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmed807762/gemma-2b-vetdataset-finetuned | ahmed807762 | 2024-03-06T20:48:34Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T20:45:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Maaz66/mistral_python_instruct_140_rows_QnA_Rating | Maaz66 | 2024-03-06T20:42:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-03-06T20:41:48Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
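The card's metadata (a PEFT adapter on `mistralai/Mistral-7B-Instruct-v0.2`) suggests a loading recipe along these lines; a minimal sketch, not author-confirmed:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this adapter (inferred from the card metadata).
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")
model = PeftModel.from_pretrained(base, "Maaz66/mistral_python_instruct_140_rows_QnA_Rating")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```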
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0 |
nrbhole/layoutxlm-finetuned-xfund-fr-1 | nrbhole | 2024-03-06T20:35:16Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfund-custom",
"base_model:nrbhole/layoutxlm-finetuned-xfund-fr",
"base_model:finetune:nrbhole/layoutxlm-finetuned-xfund-fr",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T19:51:35Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfund-custom
base_model: nrbhole/layoutxlm-finetuned-xfund-fr
model-index:
- name: layoutxlm-finetuned-xfund-fr-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr-1
This model is a fine-tuned version of [nrbhole/layoutxlm-finetuned-xfund-fr](https://huggingface.co/nrbhole/layoutxlm-finetuned-xfund-fr) on the xfund-custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
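Pending author details, a minimal inference sketch based on the card's tags (token classification with a LayoutXLM backbone); it assumes the base LayoutXLM processor with its built-in Tesseract OCR, and `form.png` is a hypothetical input:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutxlm-base")  # needs Tesseract installed for OCR
model = AutoModelForTokenClassification.from_pretrained("nrbhole/layoutxlm-finetuned-xfund-fr-1")

image = Image.open("form.png").convert("RGB")  # hypothetical scanned form
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # one label id per token
```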
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2000
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
nrbhole/layoutxlm-finetuned-xfund-fr | nrbhole | 2024-03-06T20:34:59Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfund-custom",
"base_model:microsoft/layoutxlm-base",
"base_model:finetune:microsoft/layoutxlm-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T19:10:35Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfund-custom
base_model: microsoft/layoutxlm-base
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfund-custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OwOOwO/exp2 | OwOOwO | 2024-03-06T20:33:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T20:31:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sanbongazin/willgpt-neox-small_v2 | sanbongazin | 2024-03-06T20:31:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"ja",
"dataset:sanbongazin/WilladgeArticle",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T17:02:54Z | ---
language:
- ja
license: mit
library_name: transformers
datasets:
- sanbongazin/WilladgeArticle
metrics:
- accuracy
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
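In the absence of author-provided code, a minimal sketch based on the card's `text-generation` pipeline tag and Japanese language tag:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="sanbongazin/willgpt-neox-small_v2")
print(generator("γγγ«γ‘γ―γ", max_new_tokens=50)[0]["generated_text"])
```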
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Delview/Taxi-v3 | Delview | 2024-03-06T20:29:45Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T20:21:17Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Delview/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
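`load_from_hub` is not a published library function; a minimal sketch of the Deep RL course helper it refers to, assuming the model is stored as a pickled dict:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and load it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```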
|
furrutiav/bert_qa_extractor_2022_ulra_by_question_type_ef_plus_nllf_best_by_mixtral_v2_signal_it_87 | furrutiav | 2024-03-06T20:23:35Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T20:22:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
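Pending author details, a minimal sketch based on the card's `bert` / `feature-extraction` tags; pooling on the `[CLS]` token is an assumption:
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "furrutiav/bert_qa_extractor_2022_ulra_by_question_type_ef_plus_nllf_best_by_mixtral_v2_signal_it_87"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

with torch.no_grad():
    inputs = tokenizer("Is this a question?", return_tensors="pt")
    embedding = model(**inputs).last_hidden_state[:, 0]  # [CLS] vector
```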
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shiroyaksha/distilgpt2-finetuned | Shiroyaksha | 2024-03-06T20:22:53Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:adapter:distilbert/distilgpt2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T05:49:18Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilgpt2
model-index:
- name: distilgpt2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9883
## Model description
More information needed
## Intended uses & limitations
More information needed
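Pending author details, a minimal inference sketch for this PEFT adapter (base model `distilgpt2`, per the card metadata):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Shiroyaksha/distilgpt2-finetuned")
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```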
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4702 | 1.0 | 3359 | 2.2441 |
| 2.2416 | 2.0 | 6718 | 2.0566 |
| 2.156 | 3.0 | 10077 | 1.9883 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2 |
kuyesu22/ll-avatar | kuyesu22 | 2024-03-06T20:20:14Z | 0 | 0 | null | [
"llava",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2023-10-27T10:05:15Z | ---
license: mit
tags:
- llava
pipeline_tag: image-text-to-text
---
|
Delview/q-FrozenLake-v1-4x4-noSlippery | Delview | 2024-03-06T20:18:10Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T20:18:08Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Delview/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
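A short greedy rollout sketch, assuming the loaded dict stores its Q-table under a `"qtable"` key (as in the Deep RL course notebooks) and a gymnasium-style API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```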
|
ndieckow/LunarLander-ppo | ndieckow | 2024-03-06T20:15:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T20:15:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.50 +/- 17.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a placeholder for the `.zip` checkpoint stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub(
    repo_id="ndieckow/LunarLander-ppo",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)
```
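To re-check the reported score, the standard SB3 evaluation helper can be used (a sketch; `LunarLander-v2` requires the Box2D extra):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```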
|
pivovalera2012/Llama-2-7b-Dr-House_v2 | pivovalera2012 | 2024-03-06T20:15:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-04T18:52:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alpindale/llava-1.5-7b | alpindale | 2024-03-06T20:13:18Z | 19 | 2 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"image-text-to-text",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2023-12-22T13:03:31Z | ---
tags:
- llava
inference: false
pipeline_tag: image-text-to-text
---
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1.5-7B was trained in September 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. |
mobidic/solar-10b-platypus-lora | mobidic | 2024-03-06T20:11:14Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T19:47:56Z | ---
library_name: transformers
tags: []
license: cc-by-nc-nd-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** mobidic
- **Model type:** language generation
- **License:** cc-by-nc-nd-4.0
- **Finetuned from model [optional]:** solar-10B
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** mobidic/solar-10b-platypus-lora
|
Thomstr/ppo-Huggy | Thomstr | 2024-03-06T20:06:52Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-06T18:52:43Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent with ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog πΆ to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Thomstr/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play π
|
Mursel/mistralai-Code-Instruct-Finetune-test | Mursel | 2024-03-06T20:05:51Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T19:58:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
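Pending author details, a minimal generation sketch assuming the standard Mistral-Instruct chat format (the repo id comes from this card's metadata):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Mursel/mistralai-Code-Instruct-Finetune-test"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```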
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
glacio-dev/Qwen1.5-7B-Chat-Q4 | glacio-dev | 2024-03-06T20:04:46Z | 4 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"license:other",
"region:us"
] | text-generation | 2024-03-06T19:31:12Z | ---
language:
- en
license: other
tags:
- chat
- mlx
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
---
# glacio-dev/Qwen1.5-7B-Chat-Q4
This model was converted to MLX format from [`Qwen/Qwen1.5-7B-Chat`](https://huggingface.co/Qwen/Qwen1.5-7B-Chat).
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("glacio-dev/Qwen1.5-7B-Chat-Q4")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
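Since this is a chat model, prompts generally work better when run through the chat template; a sketch, assuming the wrapped tokenizer exposes the standard `apply_chat_template` method:
```python
messages = [{"role": "user", "content": "Give me a haiku about autumn."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```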
|
SjardiWillems/distilbert-base-uncased-finetuned-cola | SjardiWillems | 2024-03-06T19:58:45Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-03T12:09:14Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8256
- Matthews Correlation: 0.5339
## Model description
More information needed
## Intended uses & limitations
More information needed
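Pending author details, a minimal inference sketch; the label names are assumed to be the default `LABEL_0`/`LABEL_1`, mapping to CoLA acceptability:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="SjardiWillems/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by the boy."))
```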
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5191 | 1.0 | 535 | 0.4707 | 0.4775 |
| 0.3443 | 2.0 | 1070 | 0.4830 | 0.5068 |
| 0.2403 | 3.0 | 1605 | 0.6290 | 0.5238 |
| 0.1795 | 4.0 | 2140 | 0.7628 | 0.5123 |
| 0.1326 | 5.0 | 2675 | 0.8256 | 0.5339 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
farzanrahmani/NLP_HF_Workshop_HW | farzanrahmani | 2024-03-06T19:58:34Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"text-classification",
"en",
"dataset:dair-ai/emotion",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T19:05:06Z | ---
license: mit
datasets:
- dair-ai/emotion
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
NLP workshop HW: Introduction to Hugging Face and working with it. |
iMahdiGhazavi/NLP_HF_Workshop | iMahdiGhazavi | 2024-03-06T19:43:20Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"Sentiment Analysis",
"IMDB",
"en",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-04T08:28:34Z | ---
datasets:
- imdb
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- Sentiment Analysis
- IMDB
--- |
Shahid04/RandomDataModelq22 | Shahid04 | 2024-03-06T19:41:34Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"blenderbot",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-06T19:39:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
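Pending author details, a minimal sketch based on the card's `text2text-generation` tag (a BlenderBot checkpoint, per the architecture tag):
```python
from transformers import pipeline

chat = pipeline("text2text-generation", model="Shahid04/RandomDataModelq22")
print(chat("Hello, how are you today?"))
```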
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jjovalle99/llama7bit-lora-sql | jjovalle99 | 2024-03-06T19:40:27Z | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-03-05T04:42:35Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
datasets:
- generator
model-index:
- name: llama7bit-lora-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama7bit-lora-sql
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1399
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
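Expressed as `transformers.TrainingArguments`, the list above corresponds roughly to the following (a sketch; `output_dir` is hypothetical):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama7bit-lora-sql",  # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1399,
    gradient_accumulation_steps=4,  # effective batch size 32
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=500,
)
```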
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2068 | 0.06 | 20 | 0.8181 |
| 0.6757 | 0.12 | 40 | 0.5148 |
| 0.5104 | 0.17 | 60 | 0.4552 |
| 0.4633 | 0.23 | 80 | 0.4269 |
| 0.442 | 0.29 | 100 | 0.4110 |
| 0.428 | 0.35 | 120 | 0.3993 |
| 0.4209 | 0.41 | 140 | 0.3983 |
| 0.4142 | 0.47 | 160 | 0.3932 |
| 0.4032 | 0.52 | 180 | 0.3888 |
| 0.3999 | 0.58 | 200 | 0.3841 |
| 0.3977 | 0.64 | 220 | 0.3827 |
| 0.397 | 0.7 | 240 | 0.3811 |
| 0.3927 | 0.76 | 260 | 0.3781 |
| 0.3873 | 0.82 | 280 | 0.3762 |
| 0.3871 | 0.87 | 300 | 0.3728 |
| 0.3861 | 0.93 | 320 | 0.3715 |
| 0.3809 | 0.99 | 340 | 0.3695 |
| 0.3664 | 1.05 | 360 | 0.3700 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2 |
jjovalle99/llama7b-ft-lora-sql-v2 | jjovalle99 | 2024-03-06T19:40:24Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T19:39:08Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_ef_plus_nllf_v0_best_by_mixtral_v2_signal_it_147 | furrutiav | 2024-03-06T19:22:08Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T18:51:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
completelyboofyblitzed/Mistral-7B-Instruct-v0.2lr_1.25e-06_lora_alpha_8_r_16_wd_0.001_warmup_ratio_0.3_ep_116-tuned | completelyboofyblitzed | 2024-03-06T19:16:49Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-03-06T19:12:42Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MehdiHosseiniMoghadam/AVA-Gemma-7B-V2 | MehdiHosseiniMoghadam | 2024-03-06T19:14:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T19:08:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
motionsh/Q-Taxi-v3 | motionsh | 2024-03-06T19:14:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T19:14:04Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the Hugging Face Deep RL course helper (a minimal definition is sketched below)
model = load_from_hub(repo_id="motionsh/Q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
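`load_from_hub` is left undefined above; a minimal version of the Deep RL course helper, plus a greedy rollout, might look like this (the `"qtable"` key is an assumption based on the course's pickle format):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Download the pickled model dict from the Hub and deserialize it
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

# Greedy rollout: always take the highest-value action from the Q-table
state, info = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```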
|
JoseLuis95/social-behavior-emotions | JoseLuis95 | 2024-03-06T19:13:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"simplification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T14:16:10Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- simplification
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: social-behavior-emotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# social-behavior-emotions
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2910
- Precision: 0.9233
- Recall: 0.9175
- F1: 0.9190
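A minimal inference sketch with the `transformers` pipeline (the label set is not documented in this card, so the output labels are whatever the checkpoint config defines):

```python
from transformers import pipeline

# Loads the fine-tuned classifier directly from the Hub
clf = pipeline("text-classification", model="JoseLuis95/social-behavior-emotions")
print(clf("I am so happy to see you!"))
```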
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.5671 | 1.0 | 800 | 0.5302 | 0.8664 | 0.8469 | 0.8502 |
| 0.4396 | 2.0 | 1600 | 0.3693 | 0.9118 | 0.9 | 0.9019 |
| 0.3103 | 3.0 | 2400 | 0.2973 | 0.9230 | 0.9125 | 0.9146 |
| 0.1984 | 4.0 | 3200 | 0.3098 | 0.9191 | 0.9125 | 0.9132 |
| 0.1283 | 5.0 | 4000 | 0.2910 | 0.9233 | 0.9175 | 0.9190 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
RichardErkhov/MobiLlama-1B-Chat-gguf | RichardErkhov | 2024-03-06T19:10:37Z | 76 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-02-25T19:23:48Z | !! Hello everyone, model is not working, it is an experimental attempt to quantize it.
I understood the error, but Im facing it too. Im a bit unexperienced in this. If someone knows how to manually set the layers size please help. Thank you!
GGUF quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Linkedin](https://www.linkedin.com/in/richard-erkhov/)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MobiLlama-1B-Chat - GGUF
- Model creator: https://huggingface.co/MBZUAI/
- Original model: https://huggingface.co/MBZUAI/MobiLlama-1B-Chat/
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| [MobiLlama-1B-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q2_K.gguf) | Q2_K | 2 | 0.47GB | significant quality loss - not recommended for most purposes |
| [MobiLlama-1B-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q3_K_S.gguf) | Q3_K_S | 3 | 0.53GB | very small, high quality loss |
| [MobiLlama-1B-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q3_K_M.gguf) | Q3_K_M | 3 | 0.59GB | very small, high quality loss |
| [MobiLlama-1B-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q3_K_L.gguf) | Q3_K_L | 3 | 0.63GB | small, substantial quality loss |
| [MobiLlama-1B-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q4_0.gguf) | Q4_0 | 4 | 0.68GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MobiLlama-1B-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q4_K_S.gguf) | Q4_K_S | 4 | 0.68GB | small, greater quality loss |
| [MobiLlama-1B-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q4_K_M.gguf) | Q4_K_M | 4 | 0.72GB | medium, balanced quality - recommended |
| [MobiLlama-1B-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q5_0.gguf) | Q5_0 | 5 | 0.82GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MobiLlama-1B-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q5_K_S.gguf) | Q5_K_S | 5 | 0.82GB | large, low quality loss - recommended |
| [MobiLlama-1B-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q5_K_M.gguf) | Q5_K_M | 5 | 0.84GB | large, very low quality loss - recommended |
| [MobiLlama-1B-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q6_K.gguf) | Q6_K | 6 | 0.96GB | very large, extremely low quality loss |
| [MobiLlama-1B-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/MobiLlama-1B-Chat-gguf/blob/main/MobiLlama-1B-Chat.Q8_0.gguf) | Q8_0 | 8 | 1.25GB | very large, extremely low quality loss - not recommended |
Original model description:
---
license: apache-2.0
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
- icybee/share_gpt_90k_v1
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# MobiLlama-1B-Chat
We present MobiLlama-1.2B-Chat, an instruction-following model fine-tuned on [MBZUAI/MobiLlama-1B](https://huggingface.co/MBZUAI/MobiLlama-1B).
## Model Description
- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
- [Metrics](https://github.com/LLM360/Analysis360)
- [Finetuning Code](https://github.com/lm-sys/FastChat)
# Loading MobiLlama-1B-Chat
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MBZUAI/MobiLlama-1B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("MBZUAI/MobiLlama-1B-Chat", trust_remote_code=True)
# template adapted from FastChat
template= "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n### Human: Got any creative ideas for a 10 year old's birthday?\n### Assistant: Of course! Here are some creative ideas for a 10-year-old's birthday party:\n1. Treasure Hunt: Organize a treasure hunt in your backyard or nearby park. Create clues and riddles for the kids to solve, leading them to hidden treasures and surprises.\n2. Science Party: Plan a science-themed party where kids can engage in fun and interactive experiments. You can set up different stations with activities like making slime, erupting volcanoes, or creating simple chemical reactions.\n3. Outdoor Movie Night: Set up a backyard movie night with a projector and a large screen or white sheet. Create a cozy seating area with blankets and pillows, and serve popcorn and snacks while the kids enjoy a favorite movie under the stars.\n4. DIY Crafts Party: Arrange a craft party where kids can unleash their creativity. Provide a variety of craft supplies like beads, paints, and fabrics, and let them create their own unique masterpieces to take home as party favors.\n5. Sports Olympics: Host a mini Olympics event with various sports and games. Set up different stations for activities like sack races, relay races, basketball shooting, and obstacle courses. Give out medals or certificates to the participants.\n6. Cooking Party: Have a cooking-themed party where the kids can prepare their own mini pizzas, cupcakes, or cookies. Provide toppings, frosting, and decorating supplies, and let them get hands-on in the kitchen.\n7. Superhero Training Camp: Create a superhero-themed party where the kids can engage in fun training activities. Set up an obstacle course, have them design their own superhero capes or masks, and organize superhero-themed games and challenges.\n8. Outdoor Adventure: Plan an outdoor adventure party at a local park or nature reserve. Arrange activities like hiking, nature scavenger hunts, or a picnic with games. Encourage exploration and appreciation for the outdoors.\nRemember to tailor the activities to the birthday child's interests and preferences. Have a great celebration!\n### Human: {prompt}\n### Assistant:"
prompt = "What are the psychological effects of urban living on mental health?"
input_str = template.format(prompt=prompt)
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```
Alternatively, you may use [FastChat](https://github.com/lm-sys/FastChat):
```bash
python3 -m fastchat.serve.cli --model-path MBZUAI/MobiLlama-1B-Chat
```
## Hyperparameters
| Hyperparameter | Value |
| ----------- | ----------- |
| Total Parameters | 1.2B |
| Hidden Size | 2048 |
| Intermediate Size (MLPs) | 5632 |
| Number of Attention Heads | 32 |
| Number of Hidden Layers | 22 |
| RMSNorm ε | 1e-5 |
| Max Seq Length | 2048 |
| Vocab Size | 32000 |
| Training Hyperparameter | Value |
| ----------- | ----------- |
| learning_rate | 2e-5 |
| num_train_epochs | 3 |
| per_device_train_batch_size | 2 |
| gradient_accumulation_steps | 16 |
| warmup_ratio | 0.04 |
| model_max_length | 2048 |
## Evaluation
| Evaluation Benchmark | MobiLlama-05B-Chat | MobiLlama-1.2B-Chat |
| ----------- | ----------- | ----------- |
| HellaSwag | 0.5042 | 0.6244 |
| MMLU | 0.2677 | 0.2635 |
| Arc Challenge | 0.2935 | 0.3558 |
| TruthfulQA | 0.3997 | 0.3848 |
| CrowsPairs | 0.5694 | 0.679 |
| PIQA | 0.7078 | 0.7557 |
| Race | 0.3320 | 0.3598 |
| SIQA | 0.4165 | 0.4396 |
| Winogrande | 0.5659 | 0.5966 |
## Intended Uses
Given the nature of the training data, the MobiLlama-1B model is best suited for prompts using the QA format, the chat format, and the code format.
## Citation
|
rdp99/roberta-base-sst2-finetuned-emotion | rdp99 | 2024-03-06T19:09:55Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T19:09:27Z | ---
license: mit
base_model: WillHeld/roberta-base-sst2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-sst2-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-sst2-finetuned-emotion
This model is a fine-tuned version of [WillHeld/roberta-base-sst2](https://huggingface.co/WillHeld/roberta-base-sst2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2058
- Accuracy: 0.9381
- F1: 0.9381
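A minimal inference sketch with the `transformers` pipeline (label names come from the checkpoint config and are not documented here):

```python
from transformers import pipeline

# Loads the fine-tuned classifier directly from the Hub
clf = pipeline("text-classification", model="rdp99/roberta-base-sst2-finetuned-emotion")
print(clf("This movie made my day!"))
```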
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1387 | 1.0 | 109 | 0.1822 | 0.9415 | 0.9415 |
| 0.0826 | 2.0 | 218 | 0.2058 | 0.9381 | 0.9381 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Weni/ZeroShot-3.3.28-Mistral-7b-Multilanguage-3.2.0 | Weni | 2024-03-06T19:07:41Z | 1 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T14:59:14Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: ZeroShot-3.3.28-Mistral-7b-Multilanguage-3.2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZeroShot-3.3.28-Mistral-7b-Multilanguage-3.2.0
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
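A minimal sketch for loading this adapter with π€ PEFT (assuming access to the gated base model):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the fine-tuned LoRA adapter from the Hub
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")
model = PeftModel.from_pretrained(base, "Weni/ZeroShot-3.3.28-Mistral-7b-Multilanguage-3.2.0")
```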
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.111 | 0.06 | 100 | 0.1583 |
| 0.1678 | 0.12 | 200 | 0.1279 |
| 0.1345 | 0.19 | 300 | 0.1216 |
| 0.1432 | 0.25 | 400 | 0.1087 |
| 0.1136 | 0.31 | 500 | 0.1330 |
| 0.1208 | 0.37 | 600 | 0.1074 |
| 0.0972 | 0.43 | 700 | 0.1033 |
| 0.115 | 0.5 | 800 | 0.0860 |
| 0.0946 | 0.56 | 900 | 0.0953 |
| 0.0702 | 0.62 | 1000 | 0.0731 |
| 0.0671 | 0.68 | 1100 | 0.0645 |
| 0.0679 | 0.74 | 1200 | 0.0604 |
| 0.0632 | 0.81 | 1300 | 0.0558 |
| 0.0492 | 0.87 | 1400 | 0.0529 |
| 0.048 | 0.93 | 1500 | 0.0510 |
| 0.0488 | 0.99 | 1600 | 0.0501 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Abdelkareem/xlm-roberta-base-finetuned-panx-all | Abdelkareem | 2024-03-06T19:05:11Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T18:52:59Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2197
- F1: 0.8595
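A minimal NER sketch with the token-classification pipeline (entity label names follow the checkpoint config):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword tokens into whole entity spans
ner = pipeline("token-classification", model="Abdelkareem/xlm-roberta-base-finetuned-panx-all", aggregation_strategy="simple")
print(ner("Angela Merkel lebt in Berlin."))
```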
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3862 | 1.0 | 835 | 0.2675 | 0.8030 |
| 0.2084 | 2.0 | 1670 | 0.2236 | 0.8380 |
| 0.137 | 3.0 | 2505 | 0.2197 | 0.8595 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
deepnet/SN6-30M1 | deepnet | 2024-03-06T19:01:52Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-28T17:46:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Abdelkareem/xlm-roberta-base-finetuned-panx-it | Abdelkareem | 2024-03-06T18:51:27Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T18:49:43Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2544
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6888 | 1.0 | 70 | 0.3117 | 0.7361 |
| 0.2626 | 2.0 | 140 | 0.2461 | 0.8073 |
| 0.1745 | 3.0 | 210 | 0.2544 | 0.8248 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
kojack7/roberta-large-lora-token-classification-240306trainplusvalplustest_FINAL | kojack7 | 2024-03-06T18:46:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T18:46:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SyntaxTheRed/a2c-PandaReachDense-v3 | SyntaxTheRed | 2024-03-06T18:40:17Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T18:36:03Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption based on the repo name, so verify it against the repo's file list:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The .zip filename is an assumption; check the repo's files
checkpoint = load_from_hub("SyntaxTheRed/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Yelinz/ppo-ict-scouts-moon-landung | Yelinz | 2024-03-06T18:39:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T18:39:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -164.93 +/- 70.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so verify it against the repo's file list:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename is an assumption; check the repo's files
checkpoint = load_from_hub("Yelinz/ppo-ict-scouts-moon-landung", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Litzy619/V0305O7 | Litzy619 | 2024-03-06T18:25:50Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-06T00:49:56Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305O7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305O7
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4766 | 0.09 | 10 | 0.6952 |
| 0.4255 | 0.17 | 20 | 0.1685 |
| 0.17 | 0.26 | 30 | 0.1509 |
| 0.1517 | 0.34 | 40 | 0.1541 |
| 0.1513 | 0.43 | 50 | 0.1504 |
| 0.1572 | 0.51 | 60 | 0.1512 |
| 0.1522 | 0.6 | 70 | 0.1504 |
| 0.153 | 0.68 | 80 | 0.1493 |
| 0.1494 | 0.77 | 90 | 0.1491 |
| 0.1525 | 0.85 | 100 | 0.1494 |
| 0.1543 | 0.94 | 110 | 0.1495 |
| 0.1505 | 1.02 | 120 | 0.1497 |
| 0.1544 | 1.11 | 130 | 0.1511 |
| 0.1498 | 1.19 | 140 | 0.1510 |
| 0.1554 | 1.28 | 150 | 0.1513 |
| 0.1547 | 1.37 | 160 | 0.1552 |
| 0.153 | 1.45 | 170 | 0.1489 |
| 0.1501 | 1.54 | 180 | 0.1497 |
| 0.1541 | 1.62 | 190 | 0.1498 |
| 0.1535 | 1.71 | 200 | 0.1486 |
| 0.1519 | 1.79 | 210 | 0.1521 |
| 0.1554 | 1.88 | 220 | 0.1506 |
| 0.1568 | 1.96 | 230 | 0.1498 |
| 0.1513 | 2.05 | 240 | 0.1501 |
| 0.1546 | 2.13 | 250 | 0.1487 |
| 0.1513 | 2.22 | 260 | 0.1494 |
| 0.1497 | 2.3 | 270 | 0.1491 |
| 0.155 | 2.39 | 280 | 0.1484 |
| 0.1531 | 2.47 | 290 | 0.1486 |
| 0.1518 | 2.56 | 300 | 0.1490 |
| 0.1509 | 2.65 | 310 | 0.1492 |
| 0.1523 | 2.73 | 320 | 0.1496 |
| 0.1505 | 2.82 | 330 | 0.1495 |
| 0.1504 | 2.9 | 340 | 0.1495 |
| 0.1514 | 2.99 | 350 | 0.1494 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
OmarHaroon01/compressed_byt5_small_finetune_summarize | OmarHaroon01 | 2024-03-06T18:24:07Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-06T18:23:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
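A minimal sketch, assuming the repo's `text2text-generation` tag and summarization-oriented name describe the intended task (the T5-style `summarize:` prefix and the sample text are assumptions):
```python
from transformers import pipeline

# Task inferred from the repo tag/name; the "summarize:" prefix is an assumption
pipe = pipeline(
    "text2text-generation",
    model="OmarHaroon01/compressed_byt5_small_finetune_summarize",
)
text = "summarize: The quick brown fox jumped over the lazy dog while the farmer slept."
print(pipe(text)[0]["generated_text"])
```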
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kojack7/roberta-large-lora-token-classification-240306trainplusval_testmetric | kojack7 | 2024-03-06T18:21:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T18:21:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
StaAhmed/QA_distell0 | StaAhmed | 2024-03-06T18:13:59Z | 26 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:StaAhmed/QA_distell",
"base_model:finetune:StaAhmed/QA_distell",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-03-06T16:45:42Z | ---
license: apache-2.0
base_model: StaAhmed/QA_distell
tags:
- generated_from_trainer
model-index:
- name: QA_distell0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_distell0
This model is a fine-tuned version of [StaAhmed/QA_distell](https://huggingface.co/StaAhmed/QA_distell) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2221
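A minimal usage sketch with the `question-answering` pipeline (the question and context strings are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint for extractive QA
qa = pipeline("question-answering", model="StaAhmed/QA_distell0")

# Illustrative inputs; replace with your own question/context pair
result = qa(
    question="What task is the model fine-tuned for?",
    context="QA_distell0 is a DistilBERT model fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```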
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 41 | 0.2368 |
| No log | 2.0 | 82 | 0.2410 |
| No log | 3.0 | 123 | 0.2006 |
| No log | 4.0 | 164 | 0.1666 |
| No log | 5.0 | 205 | 0.1845 |
| No log | 6.0 | 246 | 0.1868 |
| No log | 7.0 | 287 | 0.2041 |
| No log | 8.0 | 328 | 0.2036 |
| No log | 9.0 | 369 | 0.2284 |
| No log | 10.0 | 410 | 0.2175 |
| No log | 11.0 | 451 | 0.2337 |
| No log | 12.0 | 492 | 0.2221 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
haturusinghe/sinhala_off_finetuned_completions_llama2_7b_class_head | haturusinghe | 2024-03-06T18:13:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T18:13:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
panos-span/Pixelcopter-PLE-v0 | panos-span | 2024-03-06T18:11:54Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T16:24:31Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.40 +/- 22.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rAIfle/Sloppy-Wingman-8x7B-exl2-rpcal | rAIfle | 2024-03-06T18:02:56Z | 0 | 0 | null | [
"mergekit",
"merge",
"region:us"
] | null | 2024-02-22T16:03:34Z | ---
base_model: []
tags:
- mergekit
- merge
---
### Note
Fifth try is the charm on this quant, it seems. I've been hitting too many RunPod issues lately; it's a huge pain having to rerun a bunch of work because the pods lose internet, among other problems.
### On with the show!
Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
Branches:
- `main` -- `measurement.json`
- `2.25b6h` -- 2.25bpw, 6bit lm_head
- `3.5b6h` -- 3.5bpw, 6bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
Requires ExllamaV2 version 0.0.12 and up.
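Each quant lives on its own branch, so a single bpw level can be fetched by revision; a sketch with `huggingface_hub` (the local directory name is arbitrary):
```python
from huggingface_hub import snapshot_download

# Fetch only the 3.5bpw quant by pointing `revision` at its branch
snapshot_download(
    repo_id="rAIfle/Sloppy-Wingman-8x7B-exl2-rpcal",
    revision="3.5b6h",
    local_dir="Sloppy-Wingman-8x7B-3.5b6h",
)
```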
Original model link: [rAIfle/Sloppy-Wingman-8x7B-hf](https://huggingface.co/rAIfle/Sloppy-Wingman-8x7B-hf)
Original model README below.
***
# Sloppy-Wingman-8x7B-hf

Big slop, good model.
Runs better at a slightly higher temp (1.1-ish) than usual, along with 0.05 MinP and 0.28 snoot.
Bog-standard ChatML works best imo, but Alpaca and Mixtral formats work (to some degree) too.
Parts:
```yaml
models:
- model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
parameters:
weight: 0.33
- model: mistralai/Mixtral-8x7B-v0.1+wandb/Mixtral-8x7b-Remixtral
parameters:
weight: 0.33
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-v0.1
dtype: float16
```
and
```yaml
models:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
parameters:
weight: 0.85
- model: notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES
parameters:
weight: 0.25
- model: ycros/BagelWorldTour-8x7B
parameters:
weight: 0.1
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
dtype: float16
```
The two were then SLERP-merged as per the config below.
---
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* ./02-friend2-instruct
* ./01-friend2-base
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ./01-friend2-base
- model: ./02-friend2-instruct
merge_method: slerp
base_model: ./01-friend2-base
parameters:
t:
- value: 0.5
dtype: float16
```
|
Trelis/llava-v1.6-mistral-7b-PATCHED | Trelis | 2024-03-06T18:00:10Z | 18 | 8 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2024-02-14T11:36:42Z | ---
license: apache-2.0
tags:
- llava
inference: false
pipeline_tag: image-text-to-text
---
<br>
<br>
# LLaVA Model Card - PATCHED!
This is a patched version of the original model, with patches from aliencaocao applied from [here](https://github.com/haotian-liu/LLaVA/pull/1115).
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
**Model date:**
LLaVA-v1.6-Mistral-7B was trained in December 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) license.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. |
tonyassi/gucci-ss18-fashion-lora | tonyassi | 2024-03-06T17:59:37Z | 4 | 4 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-13T19:57:10Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Gucci SS18 style
license: openrail++
---
# SDXL LoRA DreamBooth - tonyassi/gucci-ss18-fashion-lora
by [Tony Assi](https://www.tonyassi.com/)
Dreambooth Lora style based on the [Gucci SS18](https://www.vogue.com/fashion-shows/spring-2018-ready-to-wear/gucci) collection.

## Trigger words
Use **Gucci SS18 style** in the prompt to trigger the style.
## How to use
```bash
pip install diffusers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
# Load the pipeline
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae,
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
pipe.load_lora_weights("tonyassi/gucci-ss18-fashion-lora")
pipe.to("cuda")
# Generate image
prompt = "Gucci SS18 style, megan fox wearing a gold mesh dress with crystals"
image = pipe(prompt=prompt,
height=1024,
width=1024,
num_inference_steps=50,
negative_prompt="ugly, deformed face, deformed body").images[0]
image
```
## Examples

**Gucci SS18 style, marilyn monroe wearing elegant dress made of crystals**

**Gucci SS18 style, young nicole kidman wearing a pink sequins leotard, 70s, glam rock**

**Gucci SS18 style, david bowie, suit, 70s, glam rock**

**Gucci SS18 style, young nicole kidman wearing a pink sequins leotard, stocking, 70s, glam rock**

**Gucci SS18 style, david bowie**
## Model description
These are tonyassi/gucci-ss18-fashion-lora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/tonyassi/gucci-ss18-fashion-lora/tree/main) them in the Files & versions tab.
|
tonyassi/mcqueen-fw09-fashion-lora | tonyassi | 2024-03-06T17:57:38Z | 6 | 2 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-19T18:03:40Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: McQueen FW09 style
license: openrail++
---
# SDXL LoRA DreamBooth - tonyassi/mcqueen-fw09-fashion-lora
by [Tony Assi](https://www.tonyassi.com/)
Dreambooth Lora style based on the [McQueen FW09](https://www.vogue.com/fashion-shows/fall-2009-ready-to-wear/alexander-mcqueen) collection.

## Trigger words
Use **McQueen FW09 style** in the prompt to trigger the style.
## How to use
```bash
pip install diffusers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
# Load the pipeline
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae,
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
pipe.load_lora_weights("tonyassi/mcqueen-fw09-fashion-lora")
pipe.to("cuda")
# Generate image
prompt = "McQueen FW09 style, megan fox wearing a gold mesh dress with crystals"
image = pipe(prompt=prompt,
height=1024,
width=1024,
num_inference_steps=50,
negative_prompt="ugly, deformed face, deformed body").images[0]
image
```
## Examples

**McQueen FW09 style, Bella Hadid wearing a dress made of flamingo feathers**

**McQueen FW09 style, Willem Dafoe**

**McQueen FW09 style, Bella Hadid, futuristic**

**McQueen FW09 style, Bella Hadid wearing a dress made of flamingo feathers**

**McQueen FW09 style, Bella Hadid, futuristic**

**McQueen FW09 style, Willem Dafoe wearing fur coat**
## Model description
These are tonyassi/mcqueen-fw09-fashion-lora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/tonyassi/mcqueen-fw09-fashion-lora/tree/main) them in the Files & versions tab.
|
rishabhjain16/whisper-large-v2_to_cv_colab | rishabhjain16 | 2024-03-06T17:55:24Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_Albanian",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-06T01:58:04Z | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_Albanian
metrics:
- wer
model-index:
- name: Whisper large-v2 Albanian Test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16 Albanian
type: mozilla-foundation/common_voice_11_Albanian
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 34.05295315682281
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper large-v2 Test
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 16 Albanian dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7073
- Wer: 34.0530
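A minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="rishabhjain16/whisper-large-v2_to_cv_colab",
)

# "sample.wav" is a placeholder path to your own audio file
print(asr("sample.wav")["text"])
```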
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1135 | 4.63 | 500 | 0.6519 | 44.8880 |
| 0.02 | 9.26 | 1000 | 0.6575 | 39.3483 |
| 0.0075 | 13.89 | 1500 | 0.6073 | 35.6823 |
| 0.0016 | 18.52 | 2000 | 0.6347 | 34.9084 |
| 0.0008 | 23.15 | 2500 | 0.6484 | 34.9491 |
| 0.0001 | 27.78 | 3000 | 0.6765 | 34.4196 |
| 0.0001 | 32.41 | 3500 | 0.6897 | 33.9308 |
| 0.0001 | 37.04 | 4000 | 0.6988 | 34.1752 |
| 0.0001 | 41.67 | 4500 | 0.7048 | 33.9715 |
| 0.0001 | 46.3 | 5000 | 0.7073 | 34.0530 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tonyassi/tony-assi-lora-1 | tonyassi | 2024-03-06T17:54:00Z | 70 | 5 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-03T07:45:20Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Tony Assi style
license: openrail++
---
# SDXL LoRA DreamBooth - tonyassi/tony-assi-lora-1
by [Tony Assi](https://www.tonyassi.com/)
Dreambooth Lora style based on the [Tony Assi](https://www.tonyassi.com/fashion) collection.
## Trigger words
Use **Tony Assi style** in the prompt to trigger the style.
## How to use
```bash
pip install diffusers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
# Load the pipeline
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae,
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
pipe.load_lora_weights("tonyassi/tony-assi-lora-1")
pipe.to("cuda")
# Generate image
prompt = "Tony Assi style, megan fox wearing a gold mesh dress with crystals"
image = pipe(prompt=prompt,
height=1024,
width=1024,
num_inference_steps=50,
negative_prompt="ugly, deformed face, deformed body").images[0]
image
```
## Examples

**Tony Assi style, hunter schafer wearing a mint green mesh outfit with puffy sleeves, white background**

**Tony Assi style, Kendall Jenner wearing a black mesh outfit with puffy black sleeves, white background**

**Tony Assi style, eva mendes wearing clear vinyl outfit, white background**

**Tony Assi style, Kendall Jenner wearing a black mesh outfit with puffy black sleeves, white background**

**Tony Assi style, Angelina Jolie wearing black mesh outfit, white background**

**Tony Assi style, Bella Hadid wearing white mesh, white background**

**Tony Assi style, hunter schafer wearing a mint green mesh outfit with puffy sleeves, white background**
## Model description
These are tonyassi/tony-assi-lora-1 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/tonyassi/tony-assi-lora-1/tree/main) them in the Files & versions tab.
|
Litzy619/V0305O6 | Litzy619 | 2024-03-06T17:51:39Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-06T04:13:15Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305O6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305O6
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
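For reference, a sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is an assumption, and the per-device batch size assumes a single GPU):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="V0305O6",              # assumption; any path works
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,    # 4 x 32 = total train batch size 128
    num_train_epochs=3,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
)
```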
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0963 | 0.09 | 10 | 0.4238 |
| 0.2106 | 0.17 | 20 | 0.1603 |
| 0.1608 | 0.26 | 30 | 0.1534 |
| 0.1522 | 0.34 | 40 | 0.1544 |
| 0.1519 | 0.43 | 50 | 0.1499 |
| 0.1577 | 0.51 | 60 | 0.1505 |
| 0.1523 | 0.6 | 70 | 0.1517 |
| 0.1535 | 0.68 | 80 | 0.1489 |
| 0.1495 | 0.77 | 90 | 0.1492 |
| 0.1612 | 0.85 | 100 | 0.1536 |
| 0.1575 | 0.94 | 110 | 0.1505 |
| 0.1516 | 1.02 | 120 | 0.1505 |
| 0.1545 | 1.11 | 130 | 0.1515 |
| 0.1499 | 1.19 | 140 | 0.1506 |
| 0.1526 | 1.28 | 150 | 0.1508 |
| 0.1529 | 1.37 | 160 | 0.1515 |
| 0.152 | 1.45 | 170 | 0.1493 |
| 0.1494 | 1.54 | 180 | 0.1503 |
| 0.1538 | 1.62 | 190 | 0.1500 |
| 0.1533 | 1.71 | 200 | 0.1496 |
| 0.1518 | 1.79 | 210 | 0.1502 |
| 0.1543 | 1.88 | 220 | 0.1506 |
| 0.1563 | 1.96 | 230 | 0.1498 |
| 0.1513 | 2.05 | 240 | 0.1499 |
| 0.1544 | 2.13 | 250 | 0.1491 |
| 0.151 | 2.22 | 260 | 0.1500 |
| 0.1496 | 2.3 | 270 | 0.1497 |
| 0.155 | 2.39 | 280 | 0.1488 |
| 0.153 | 2.47 | 290 | 0.1487 |
| 0.1517 | 2.56 | 300 | 0.1492 |
| 0.151 | 2.65 | 310 | 0.1495 |
| 0.1523 | 2.73 | 320 | 0.1496 |
| 0.1505 | 2.82 | 330 | 0.1496 |
| 0.1502 | 2.9 | 340 | 0.1497 |
| 0.1515 | 2.99 | 350 | 0.1496 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
kojack7/roberta-large-lora-token-classification-240306trainwithval | kojack7 | 2024-03-06T17:50:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:50:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
danlund4/lunarmodel | danlund4 | 2024-03-06T17:50:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T17:14:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.62 +/- 18.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename is an assumption; adjust to the actual file in this repo
checkpoint = load_from_hub("danlund4/lunarmodel", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Crataco/TinyDolphin-2.8-1.1b-imatrix-GGUF | Crataco | 2024-03-06T17:48:51Z | 157 | 6 | null | [
"gguf",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:teknium/openhermes",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T05:03:16Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- teknium/openhermes
language:
- en
---
TinyDolphin-2.8-1.1b, available in as many GGUF quantization levels as possible as of March 5th, 2024. [Kalomaze's "groups_merged.txt"](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) was used for the importance matrix, with context set to 2,048.
For a non-imatrix version, see [tsunemoto/TinyDolphin-2.8-1.1b-GGUF](https://huggingface.co/tsunemoto/TinyDolphin-2.8-1.1b-GGUF).
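A minimal sketch for running one of these quants with `llama-cpp-python` (the .gguf filename is a placeholder for whichever quant level you download):
```python
from llama_cpp import Llama

# The filename is a placeholder; substitute the quant you actually downloaded
llm = Llama(model_path="tinydolphin-2.8-1.1b.Q4_K_M.gguf", n_ctx=2048)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```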
Original model card below.
***
# TinyDolphin-2.8-1.1b

Join Our Discord! https://discord.gg/cognitivecomputations
This is an experimental model trained on two 3090s by Kearm on the new Dolphin 2.8 dataset by Eric Hartford https://erichartford.com/dolphin 🐬
## Example Outputs
Invent a sinister prank involving teleportation that leaves the victim utterly confused.
```
As you requested, here's how I would go about creating this sinister prank:
First, we need to find a location where teleportation is possible. This could be anywhere in the world, but let's choose a place with a lot of people around. For example, a crowded street or a busy park.
Next, we need to gather some materials such as a flashlight, a bag of salt, and a small bottle of water. These items are essential for making the prank work.
Now, we need to carefully position ourselves near the chosen location. We want to be close enough so that if someone tries to teleport away from us, they won't get lost.
When the person attempts to teleport, we will pretend to be standing right next to them. This will make it look like they were teleported without our knowing.
With the flashlight, we will light up the area around us. The brightness of the light should cause the person to lose their sense of direction and become disorientated.
Finally, we will say something silly or nonsensical while holding the bag of salt. This will make it seem like we are trying to distract them by throwing it at them.
When the person finally loses their way, we will disappear into thin air. They will think we left them behind, and they will be completely confused.
```
Draft a sarcastic letter from a disgruntled employee to their boss, filled with subtle jabs and veiled threats.
```
Dear [Boss],
I hope this message finds you in good health and high spirits. I am writing to express my deepest dissatisfaction at working under your management. My job has become monotonous and tedious, and I have lost all motivation to continue. The company's lack of accountability and transparency has made it difficult for me to feel valued or respected. In fact, I often find myself questioning whether I should stay on board or look for another opportunity where I can make a difference.
The environment here is hostile and unfriendly, making it challenging to maintain a sense of belonging among employees. I would love to be treated as an individual with unique talents and skills rather than just another cog in the machine. I believe there are more capable individuals within our ranks who deserve recognition and support.
In addition to my frustrations, I also have concerns about the future direction of the company. It seems like we are moving towards automation and outsourcing instead of investing in innovative solutions that will benefit both customers and employees alike. This approach does not align well with my personal values and beliefs.
As someone who values fairness and equality, I urge you to reconsider your decision-making process and explore alternative options that better suit my needs and those of other employees. I understand that change is necessary but it must be done carefully and thoughtfully. Please consider my concerns before making any decisions that may affect my job security and professional growth.
Thank you for your time and consideration. I look forward to discussing these matters further when we meet next.
Sincerely,
[Your Name]
```
Construct a poem describing the rise and rule of succesful malevolent king who ruled through fear and manipulation.
```
In his reign, he ruled with ease, his every move dictated by fear. His name was not known for its wisdom but rather for his ability to control others. He had a dark heart, consumed by malice and power.
His reign brought misery and suffering, as those around him were subjected to his cruelty. His oppression was unmatched, as he held people in check without mercy. Yet, despite all this, he remained invincible, his rule never faltering.
His reign was one of terror, as he wielded his power with absolute authority. His subjects lived in fear, knowing they could be executed at any moment. Yet, they submitted, oblivious to their plight. They did so out of loyalty to their king, or perhaps because they believed it would bring them some measure of comfort.
Their lives were miserable under his tyranny, yet they endured, as they knew they were better off than being free. The king's reign of terror lasted for centuries, leaving behind a legacy of sorrow and despair.
```
Support my efforts! ko-fi.com/kearm
# Original Model Card Below
# TinyLlama-1.1B
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| TinyLlama-1.1B-intermediate-step-1195k-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
|
ai-anytime/MedPaxTral-2x7b | ai-anytime | 2024-03-06T17:46:16Z | 16 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"medical",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T15:08:21Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
---
A medical MoE developed through the amalgamation of three leading models in the medical domain: BioMistral, Meditron, and Medalpaca. This fusion was achieved using the MergeKit library, a tool designed to blend multiple models' strengths into a unified, powerful LLM.
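A minimal loading sketch with `transformers` (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic causal-LM loading; dtype/device settings are illustrative defaults
tokenizer = AutoTokenizer.from_pretrained("ai-anytime/MedPaxTral-2x7b")
model = AutoModelForCausalLM.from_pretrained(
    "ai-anytime/MedPaxTral-2x7b",
    torch_dtype="auto",
    device_map="auto",  # requires `accelerate`
)

# Illustrative medical prompt
inputs = tokenizer("What are the common symptoms of anemia?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
|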
AwAppp/benchmarks_8bit_batch_size45 | AwAppp | 2024-03-06T17:44:15Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:44:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmarks_8bit_batch_size40 | AwAppp | 2024-03-06T17:42:11Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:42:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmarks_8bit_batch_size35 | AwAppp | 2024-03-06T17:40:09Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:40:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mpasila/gpt3-finnish-8B-gptq-4bit | mpasila | 2024-03-06T17:38:03Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"fi",
"arxiv:2203.02155",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-03-06T17:29:31Z | ---
language:
- fi
pipeline_tag: text-generation
license: apache-2.0
---
GPTQ quantization of [TurkuNLP/gpt3-finnish-8B](https://huggingface.co/TurkuNLP/gpt3-finnish-8B/), using the following settings:
```python
from transformers import GPTQConfig

# 4-bit GPTQ with group size 128, calibrated on the wikitext2 dataset.
quantization_config = GPTQConfig(
    bits=4,
    group_size=128,
    dataset="wikitext2",
    desc_act=False,
)
```
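
For reference, a checkpoint quantized with these settings can typically be loaded back like this (a minimal sketch; it assumes the `optimum` and `auto-gptq` packages are installed):

```python
# A minimal sketch of loading this quantized checkpoint with transformers
# (assumes the optimum and auto-gptq packages are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/gpt3-finnish-8B-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored in the repo is picked up automatically,
# so the weights load directly as 4-bit GPTQ.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Suomi on", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```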
# Original Model card:
A generative pretrained transformer with 8B parameters for Finnish.
TurkuNLP's Finnish GPT-3 models are a family of pretrained monolingual GPT-style language models based on the BLOOM architecture.
Note that the models are pure language models, meaning that they are not [instruction finetuned](https://arxiv.org/abs/2203.02155) for dialogue
or answering questions.
These models are intended as foundation models that can, e.g., be instruction-finetuned to serve as modern chat models.
All models were trained on 300B tokens.
**Parameters**
| Model | Layers | Dim | Heads | Params |
|--------|--------|------|-------|--------|
| Small | 12 | 768 | 12 | 186M |
| Medium | 24 | 1024 | 16 | 437M |
| Large | 24 | 1536 | 16 | 881M |
| XL | 24 | 2064 | 24 | 1.5B |
| "3B" | 32 | 2560 | 32 | 2.8B |
| "8B" | 32 | 4096 | 32 | 7.5B |
| "13B" | 40 | 5120 | 40 | 13.3B |
**Datasets**
We used a combination of multiple Finnish resources.
* Finnish Internet Parsebank https://turkunlp.org/finnish_nlp.html
* mC4 multilingual colossal, cleaned Common Crawl https://huggingface.co/datasets/mc4
* Common Crawl Finnish https://TODO
* Finnish Wikipedia https://fi.wikipedia.org/wiki
* LΓΆnnrot Projekti LΓΆnnrot http://www.lonnrot.net/
* ePub National library "epub" collection
* National library "lehdet" collection
* Suomi24 The Suomi 24 Corpus 2001-2020 http://urn.fi/urn:nbn:fi:lb-2021101527
* Reddit r/Suomi submissions and comments https://www.reddit.com/r/Suomi
* STT Finnish News Agency Archive 1992-2018 http://urn.fi/urn:nbn:fi:lb-2019041501
* Yle Finnish News Archive 2011-2018 http://urn.fi/urn:nbn:fi:lb-2017070501
* Yle Finnish News Archive 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050401
* Yle News Archive Easy-to-read Finnish 2011-2018 http://urn.fi/urn:nbn:fi:lb-2019050901
* Yle News Archive Easy-to-read Finnish 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050701
* ROOTS TODO
**Sampling ratios**
|Dataset | Chars | Ratio | Weight | W.Ratio |
|----------|--------|---------|--------|---------|
|Parsebank | 35.0B | 16.9% | 1.5 | 22.7%|
|mC4-Fi | 46.3B | 22.4% | 1.0 | 20.0%|
|CC-Fi | 79.6B | 38.5% | 1.0 | 34.4%|
|Fiwiki | 0.8B | 0.4% | 3.0 | 1.0%|
|Lönnrot | 0.8B | 0.4% | 3.0 | 1.0%|
|Yle | 1.6B | 0.8% | 2.0 | 1.4%|
|STT | 2.2B | 1.1% | 2.0 | 1.9%|
|ePub | 13.5B | 6.5% | 1.0 | 5.8%|
|Lehdet | 5.8B | 2.8% | 1.0 | 2.5%|
|Suomi24 | 20.6B | 9.9% | 1.0 | 8.9%|
|Reddit-Fi | 0.7B | 0.4% | 1.0 | 0.3%|
|**TOTAL** | **207.0B** | **100.0%** | **N/A** | **100.0%** |
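
The weighted ratio column follows from scaling each dataset's character count by its weight and renormalizing; a minimal sketch (plain Python, with the values copied from the table above):

```python
# Sketch: reproduce the W.Ratio column from the Chars and Weight columns.
# Character counts are in billions, copied from the table above.
chars = {
    "Parsebank": 35.0, "mC4-Fi": 46.3, "CC-Fi": 79.6, "Fiwiki": 0.8,
    "Lönnrot": 0.8, "Yle": 1.6, "STT": 2.2, "ePub": 13.5,
    "Lehdet": 5.8, "Suomi24": 20.6, "Reddit-Fi": 0.7,
}
weights = {
    "Parsebank": 1.5, "mC4-Fi": 1.0, "CC-Fi": 1.0, "Fiwiki": 3.0,
    "Lönnrot": 3.0, "Yle": 2.0, "STT": 2.0, "ePub": 1.0,
    "Lehdet": 1.0, "Suomi24": 1.0, "Reddit-Fi": 1.0,
}

total = sum(chars[d] * weights[d] for d in chars)
for d in chars:
    print(f"{d}: {100 * chars[d] * weights[d] / total:.1f}%")
# e.g. Parsebank -> 22.7%, CC-Fi -> 34.4%, matching the table.
```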
More documentation and a paper coming soon. |
AwAppp/benchmarks_8bit_batch_size15 | AwAppp | 2024-03-06T17:31:58Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:31:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmarks_8bit_batch_size10 | AwAppp | 2024-03-06T17:29:55Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:29:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
digiplay/CamelliaMIx_2.5D_v1_VAE | digiplay | 2024-03-06T17:27:34Z | 928 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T17:14:43Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
CamelliaMIx_2.5D_v1 + 840000 VAE

Original author's Civitai page:
https://civitai.com/models/44219?modelVersionId=78938
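
A minimal loading sketch with 🤗 diffusers (a CUDA GPU is assumed and the prompt is illustrative):

```python
# A minimal sketch of loading this checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/CamelliaMIx_2.5D_v1_VAE",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a portrait photo, soft lighting").images[0]
image.save("out.png")
```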
|
AwAppp/benchmark_original_batch_size45 | AwAppp | 2024-03-06T17:26:50Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:26:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sebasmos/vit-base-patch16-224-in21k-finetuned-lora-test | sebasmos | 2024-03-06T17:26:23Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | 2023-12-29T15:24:19Z | ---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
AwAppp/benchmark_original_batch_size40 | AwAppp | 2024-03-06T17:25:48Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:25:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmark_original_batch_size35 | AwAppp | 2024-03-06T17:24:46Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:24:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmark_original_batch_size30 | AwAppp | 2024-03-06T17:23:43Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:23:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmark_original_batch_size10 | AwAppp | 2024-03-06T17:19:34Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:19:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwAppp/benchmark_original_batch_size5 | AwAppp | 2024-03-06T17:18:32Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T17:18:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SinkableVirus/test | SinkableVirus | 2024-03-06T17:04:12Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T17:04:05Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
``` |
sujayC66/t5-base-finetuned-stocknews_1900_100 | sujayC66 | 2024-03-06T16:58:57Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-06T09:52:56Z | ---
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-stocknews_1900_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-stocknews_1900_100
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4554
- Rouge1: 40.9735
- Rouge2: 36.4343
- Rougel: 40.1125
- Rougelsum: 40.3384
- Gen Len: 19.0
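
Since the card omits a usage snippet, a minimal inference sketch follows. Treating the checkpoint as a summarization model is an assumption suggested by the ROUGE metrics and the fixed generation length of 19; only the model id comes from this card.

```python
from transformers import pipeline

# Hedged sketch: the task and example text are assumptions; only the
# repo id is taken from this card.
summarizer = pipeline("summarization", model="sujayC66/t5-base-finetuned-stocknews_1900_100")

article = "Shares of the company rose 5% after it reported quarterly revenue well above analyst estimates."
print(summarizer(article, max_length=19, min_length=5)[0]["summary_text"])
```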
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code reconstruction follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
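
These values map directly onto `Seq2SeqTrainingArguments`. A hedged reconstruction is shown below; the output directory and the generation flag are assumptions, since the actual training script is not published.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters listed above; Adam betas and
# epsilon match the transformers defaults, so they need no explicit arguments.
args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-stocknews_1900_100",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed precision
    predict_with_generate=True,  # assumed; needed to compute ROUGE during eval
)
```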
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 211 | 0.7350 | 31.7308 | 20.2914 | 28.657 | 29.3167 | 18.9596 |
| No log | 2.0 | 422 | 0.6345 | 33.1681 | 22.6637 | 30.5277 | 31.1213 | 19.0 |
| 0.9162 | 3.0 | 633 | 0.5706 | 34.6997 | 24.847 | 32.2288 | 32.8098 | 19.0 |
| 0.9162 | 4.0 | 844 | 0.5268 | 35.4092 | 26.2862 | 33.1822 | 33.6119 | 19.0 |
| 0.6423 | 5.0 | 1055 | 0.4858 | 36.1444 | 27.7265 | 34.1005 | 34.4616 | 19.0 |
| 0.6423 | 6.0 | 1266 | 0.4560 | 36.7437 | 28.449 | 34.6735 | 35.1349 | 19.0 |
| 0.6423 | 7.0 | 1477 | 0.4323 | 37.33 | 29.5265 | 35.4853 | 35.9323 | 19.0 |
| 0.5063 | 8.0 | 1688 | 0.4142 | 37.1593 | 29.6064 | 35.4064 | 35.8123 | 19.0 |
| 0.5063 | 9.0 | 1899 | 0.3991 | 38.1553 | 30.5752 | 36.2114 | 36.7167 | 19.0 |
| 0.4102 | 10.0 | 2110 | 0.3864 | 38.3045 | 31.2785 | 36.6248 | 36.9254 | 19.0 |
| 0.4102 | 11.0 | 2321 | 0.3789 | 38.2719 | 31.5007 | 36.7926 | 37.0642 | 19.0 |
| 0.3415 | 12.0 | 2532 | 0.3703 | 38.8466 | 32.1912 | 37.3333 | 37.6131 | 19.0 |
| 0.3415 | 13.0 | 2743 | 0.3618 | 38.6865 | 32.2025 | 37.2779 | 37.5144 | 19.0 |
| 0.3415 | 14.0 | 2954 | 0.3522 | 39.3257 | 33.1793 | 38.0203 | 38.2379 | 19.0 |
| 0.2912 | 15.0 | 3165 | 0.3508 | 39.4422 | 33.4813 | 38.2943 | 38.4649 | 19.0 |
| 0.2912 | 16.0 | 3376 | 0.3506 | 39.8056 | 34.1172 | 38.6625 | 38.8293 | 19.0 |
| 0.2453 | 17.0 | 3587 | 0.3519 | 39.9209 | 34.5123 | 38.9012 | 39.0863 | 19.0 |
| 0.2453 | 18.0 | 3798 | 0.3498 | 40.1987 | 34.8962 | 39.2082 | 39.3708 | 19.0 |
| 0.216 | 19.0 | 4009 | 0.3544 | 39.6724 | 34.2613 | 38.6566 | 38.7859 | 19.0 |
| 0.216 | 20.0 | 4220 | 0.3539 | 40.1049 | 34.8915 | 39.0681 | 39.2354 | 19.0 |
| 0.216 | 21.0 | 4431 | 0.3561 | 40.0241 | 34.6788 | 38.9621 | 39.112 | 19.0 |
| 0.186 | 22.0 | 4642 | 0.3548 | 40.144 | 34.8856 | 39.1343 | 39.3265 | 19.0 |
| 0.186 | 23.0 | 4853 | 0.3564 | 40.3022 | 35.2446 | 39.3555 | 39.5398 | 19.0 |
| 0.1626 | 24.0 | 5064 | 0.3575 | 40.2556 | 35.1322 | 39.2923 | 39.4501 | 19.0 |
| 0.1626 | 25.0 | 5275 | 0.3655 | 40.4588 | 35.4231 | 39.5008 | 39.6855 | 19.0 |
| 0.1626 | 26.0 | 5486 | 0.3687 | 40.3751 | 35.4048 | 39.4194 | 39.6334 | 19.0 |
| 0.1463 | 27.0 | 5697 | 0.3636 | 40.5556 | 35.6104 | 39.646 | 39.8315 | 19.0 |
| 0.1463 | 28.0 | 5908 | 0.3724 | 40.6704 | 35.7873 | 39.645 | 39.8934 | 19.0 |
| 0.1291 | 29.0 | 6119 | 0.3721 | 40.7764 | 35.9434 | 39.8896 | 40.0641 | 19.0 |
| 0.1291 | 30.0 | 6330 | 0.3767 | 40.6911 | 35.868 | 39.7979 | 40.0009 | 19.0 |
| 0.115 | 31.0 | 6541 | 0.3776 | 40.5145 | 35.7139 | 39.6426 | 39.814 | 19.0 |
| 0.115 | 32.0 | 6752 | 0.3752 | 40.6776 | 35.8839 | 39.7995 | 39.9986 | 19.0 |
| 0.115 | 33.0 | 6963 | 0.3793 | 40.5806 | 35.7407 | 39.6819 | 39.8721 | 19.0 |
| 0.1051 | 34.0 | 7174 | 0.3871 | 40.652 | 35.8792 | 39.7158 | 39.9167 | 19.0 |
| 0.1051 | 35.0 | 7385 | 0.3828 | 40.8275 | 36.0878 | 39.9195 | 40.1043 | 19.0 |
| 0.095 | 36.0 | 7596 | 0.3886 | 40.9392 | 36.2701 | 40.0753 | 40.2416 | 19.0 |
| 0.095 | 37.0 | 7807 | 0.3908 | 40.6987 | 35.9383 | 39.8522 | 40.0252 | 19.0 |
| 0.0864 | 38.0 | 8018 | 0.3937 | 40.9136 | 36.1533 | 40.0212 | 40.1877 | 19.0 |
| 0.0864 | 39.0 | 8229 | 0.3979 | 40.5823 | 35.9301 | 39.7841 | 39.9357 | 19.0 |
| 0.0864 | 40.0 | 8440 | 0.3971 | 40.9144 | 36.1874 | 40.036 | 40.2312 | 19.0 |
| 0.0812 | 41.0 | 8651 | 0.4008 | 40.8206 | 36.1899 | 40.0098 | 40.185 | 19.0 |
| 0.0812 | 42.0 | 8862 | 0.4007 | 40.6012 | 35.8957 | 39.7683 | 39.932 | 19.0 |
| 0.0747 | 43.0 | 9073 | 0.4001 | 40.8324 | 36.0613 | 39.9346 | 40.119 | 19.0 |
| 0.0747 | 44.0 | 9284 | 0.4057 | 40.8783 | 36.0747 | 39.9939 | 40.1931 | 19.0 |
| 0.0747 | 45.0 | 9495 | 0.4026 | 40.9583 | 36.2066 | 40.1362 | 40.3269 | 19.0 |
| 0.0689 | 46.0 | 9706 | 0.4132 | 40.6396 | 36.0119 | 39.8226 | 40.0266 | 19.0 |
| 0.0689 | 47.0 | 9917 | 0.4092 | 40.8679 | 36.2276 | 40.0419 | 40.2269 | 19.0 |
| 0.0643 | 48.0 | 10128 | 0.4131 | 41.0975 | 36.4785 | 40.2175 | 40.4088 | 19.0 |
| 0.0643 | 49.0 | 10339 | 0.4142 | 41.084 | 36.4548 | 40.1774 | 40.3793 | 19.0 |
| 0.0599 | 50.0 | 10550 | 0.4162 | 41.0003 | 36.4144 | 40.0912 | 40.3021 | 19.0 |
| 0.0599 | 51.0 | 10761 | 0.4201 | 41.123 | 36.4406 | 40.2193 | 40.4498 | 19.0 |
| 0.0599 | 52.0 | 10972 | 0.4185 | 41.1181 | 36.4871 | 40.2354 | 40.4111 | 19.0 |
| 0.0563 | 53.0 | 11183 | 0.4183 | 41.0662 | 36.471 | 40.2436 | 40.4196 | 19.0 |
| 0.0563 | 54.0 | 11394 | 0.4222 | 40.9644 | 36.3705 | 40.0994 | 40.2857 | 19.0 |
| 0.053 | 55.0 | 11605 | 0.4219 | 41.0366 | 36.4104 | 40.2024 | 40.3756 | 19.0 |
| 0.053 | 56.0 | 11816 | 0.4238 | 40.9543 | 36.2944 | 40.0546 | 40.2509 | 19.0 |
| 0.0502 | 57.0 | 12027 | 0.4260 | 40.8299 | 36.173 | 39.9556 | 40.1762 | 19.0 |
| 0.0502 | 58.0 | 12238 | 0.4281 | 40.7226 | 36.0612 | 39.8837 | 40.0788 | 19.0 |
| 0.0502 | 59.0 | 12449 | 0.4281 | 40.8293 | 36.1924 | 39.9873 | 40.1796 | 19.0 |
| 0.0466 | 60.0 | 12660 | 0.4276 | 40.8576 | 36.1387 | 40.0215 | 40.2374 | 19.0 |
| 0.0466 | 61.0 | 12871 | 0.4311 | 41.0218 | 36.4164 | 40.1375 | 40.3726 | 19.0 |
| 0.0462 | 62.0 | 13082 | 0.4310 | 41.006 | 36.333 | 40.1393 | 40.3476 | 19.0 |
| 0.0462 | 63.0 | 13293 | 0.4343 | 41.0375 | 36.2933 | 40.1381 | 40.3135 | 19.0 |
| 0.0423 | 64.0 | 13504 | 0.4315 | 41.004 | 36.2703 | 40.0982 | 40.31 | 19.0 |
| 0.0423 | 65.0 | 13715 | 0.4346 | 41.0361 | 36.3826 | 40.1206 | 40.3346 | 19.0 |
| 0.0423 | 66.0 | 13926 | 0.4381 | 40.8662 | 36.347 | 40.0537 | 40.2147 | 19.0 |
| 0.0405 | 67.0 | 14137 | 0.4383 | 41.0513 | 36.4805 | 40.1781 | 40.397 | 19.0 |
| 0.0405 | 68.0 | 14348 | 0.4373 | 40.9528 | 36.3512 | 40.0602 | 40.2812 | 19.0 |
| 0.0398 | 69.0 | 14559 | 0.4385 | 40.9879 | 36.3848 | 40.1668 | 40.3769 | 19.0 |
| 0.0398 | 70.0 | 14770 | 0.4414 | 40.9653 | 36.4555 | 40.1602 | 40.3589 | 19.0 |
| 0.0398 | 71.0 | 14981 | 0.4433 | 41.0236 | 36.5146 | 40.1889 | 40.4139 | 19.0 |
| 0.0378 | 72.0 | 15192 | 0.4423 | 40.9979 | 36.3904 | 40.1236 | 40.3669 | 19.0 |
| 0.0378 | 73.0 | 15403 | 0.4435 | 41.0081 | 36.4075 | 40.1324 | 40.3675 | 19.0 |
| 0.0361 | 74.0 | 15614 | 0.4423 | 41.0208 | 36.4193 | 40.1883 | 40.4144 | 19.0 |
| 0.0361 | 75.0 | 15825 | 0.4449 | 40.9626 | 36.3828 | 40.1797 | 40.3773 | 19.0 |
| 0.0354 | 76.0 | 16036 | 0.4479 | 40.9415 | 36.3803 | 40.1269 | 40.3357 | 19.0 |
| 0.0354 | 77.0 | 16247 | 0.4464 | 41.0229 | 36.5098 | 40.2163 | 40.4094 | 19.0 |
| 0.0354 | 78.0 | 16458 | 0.4464 | 40.9558 | 36.413 | 40.1258 | 40.3388 | 19.0 |
| 0.0345 | 79.0 | 16669 | 0.4465 | 40.9385 | 36.3516 | 40.0814 | 40.3247 | 19.0 |
| 0.0345 | 80.0 | 16880 | 0.4531 | 41.0034 | 36.4385 | 40.1536 | 40.3875 | 19.0 |
| 0.0332 | 81.0 | 17091 | 0.4492 | 41.0399 | 36.4823 | 40.1741 | 40.4126 | 19.0 |
| 0.0332 | 82.0 | 17302 | 0.4486 | 41.065 | 36.5245 | 40.2065 | 40.4218 | 19.0 |
| 0.0326 | 83.0 | 17513 | 0.4512 | 40.9513 | 36.3926 | 40.0856 | 40.3274 | 19.0 |
| 0.0326 | 84.0 | 17724 | 0.4515 | 40.9202 | 36.3954 | 40.0657 | 40.2837 | 19.0 |
| 0.0326 | 85.0 | 17935 | 0.4504 | 40.9972 | 36.518 | 40.1999 | 40.4031 | 19.0 |
| 0.0319 | 86.0 | 18146 | 0.4533 | 40.9467 | 36.391 | 40.1257 | 40.3422 | 19.0 |
| 0.0319 | 87.0 | 18357 | 0.4527 | 40.9682 | 36.4798 | 40.1442 | 40.3529 | 19.0 |
| 0.0306 | 88.0 | 18568 | 0.4544 | 40.9622 | 36.4381 | 40.149 | 40.3599 | 19.0 |
| 0.0306 | 89.0 | 18779 | 0.4549 | 40.9742 | 36.4306 | 40.15 | 40.3669 | 19.0 |
| 0.0306 | 90.0 | 18990 | 0.4531 | 40.9875 | 36.4958 | 40.1809 | 40.3876 | 19.0 |
| 0.031 | 91.0 | 19201 | 0.4551 | 40.9555 | 36.4406 | 40.144 | 40.3408 | 19.0 |
| 0.031 | 92.0 | 19412 | 0.4531 | 40.9665 | 36.4446 | 40.1594 | 40.3673 | 19.0 |
| 0.0299 | 93.0 | 19623 | 0.4544 | 40.9272 | 36.3767 | 40.0731 | 40.2899 | 19.0 |
| 0.0299 | 94.0 | 19834 | 0.4549 | 40.9021 | 36.3566 | 40.0557 | 40.2726 | 19.0 |
| 0.0291 | 95.0 | 20045 | 0.4544 | 40.9254 | 36.3759 | 40.0779 | 40.2962 | 19.0 |
| 0.0291 | 96.0 | 20256 | 0.4546 | 40.9254 | 36.3759 | 40.0779 | 40.2962 | 19.0 |
| 0.0291 | 97.0 | 20467 | 0.4551 | 40.9465 | 36.3891 | 40.0831 | 40.3071 | 19.0 |
| 0.0299 | 98.0 | 20678 | 0.4553 | 40.9465 | 36.3891 | 40.0831 | 40.3071 | 19.0 |
| 0.0299 | 99.0 | 20889 | 0.4554 | 40.9465 | 36.3891 | 40.0831 | 40.3071 | 19.0 |
| 0.0292 | 100.0 | 21100 | 0.4554 | 40.9735 | 36.4343 | 40.1125 | 40.3384 | 19.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
LarryAIDraw/Char-HonkaiSR-BlackSwan-V1 | LarryAIDraw | 2024-03-06T16:54:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-06T16:51:31Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/336959/black-swan-or-honkai-star-rail |
chosenone80/arabert-ner-test-2 | chosenone80 | 2024-03-06T16:53:34Z | 44 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T16:53:13Z | ---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_keras_callback
model-index:
- name: arabert-ner-test-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arabert-ner-test-2
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.1
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
peldrak/segformer-b4-ade-finetuned-grCoastline | peldrak | 2024-03-06T16:50:33Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b4-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b4-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-03-06T16:09:19Z | ---
license: other
base_model: nvidia/segformer-b4-finetuned-ade-512-512
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b4-ade-finetuned-grCoastline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-ade-finetuned-grCoastline
This model is a fine-tuned version of [nvidia/segformer-b4-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b4-finetuned-ade-512-512) on the peldrak/grCoastline_512 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2116
- Mean Iou: 0.7207
- Mean Accuracy: 0.7864
- Overall Accuracy: 0.9441
- Accuracy Water: 0.9915
- Accuracy Whitewater: 0.0
- Accuracy Sediment: 0.9083
- Accuracy Other Natural Terrain: 0.8596
- Accuracy Vegetation: 0.8771
- Accuracy Development: 0.8701
- Accuracy Unknown: 0.9983
- Iou Water: 0.9580
- Iou Whitewater: 0.0
- Iou Sediment: 0.8691
- Iou Other Natural Terrain: 0.7108
- Iou Vegetation: 0.8236
- Iou Development: 0.6878
- Iou Unknown: 0.9957
- F1 Score: 0.9437
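
For reference, a minimal inference sketch for this checkpoint; the image path is a placeholder, and the predicted class indices correspond to the seven categories listed above.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "peldrak/segformer-b4-ade-finetuned-grCoastline"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("coastline.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class indices
```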
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|:--------:|
| 1.4532 | 0.24 | 20 | 1.3435 | 0.4211 | 0.5264 | 0.7441 | 0.7775 | 0.0 | 0.2537 | 0.5410 | 0.7502 | 0.3664 | 0.9961 | 0.6995 | 0.0 | 0.2230 | 0.3135 | 0.4349 | 0.3008 | 0.9758 | 0.7366 |
| 1.0467 | 0.49 | 40 | 0.8901 | 0.4799 | 0.5609 | 0.8306 | 0.9261 | 0.0 | 0.6581 | 0.3109 | 0.9513 | 0.0809 | 0.9992 | 0.8704 | 0.0 | 0.5553 | 0.3049 | 0.5627 | 0.0804 | 0.9860 | 0.8109 |
| 0.9494 | 0.73 | 60 | 0.6358 | 0.5209 | 0.5972 | 0.8680 | 0.9788 | 0.0 | 0.8070 | 0.4142 | 0.9622 | 0.0220 | 0.9964 | 0.9044 | 0.0 | 0.6693 | 0.4008 | 0.6578 | 0.0220 | 0.9921 | 0.8452 |
| 0.6118 | 0.98 | 80 | 0.5235 | 0.5454 | 0.6191 | 0.8834 | 0.9836 | 0.0 | 0.8558 | 0.5071 | 0.9635 | 0.0283 | 0.9951 | 0.9172 | 0.0 | 0.6969 | 0.4748 | 0.7090 | 0.0282 | 0.9914 | 0.8625 |
| 0.5458 | 1.22 | 100 | 0.4739 | 0.5846 | 0.6514 | 0.8875 | 0.9514 | 0.0 | 0.8431 | 0.5217 | 0.9639 | 0.2829 | 0.9967 | 0.9258 | 0.0 | 0.7593 | 0.4726 | 0.6684 | 0.2728 | 0.9930 | 0.8793 |
| 0.5935 | 1.46 | 120 | 0.3786 | 0.6090 | 0.6793 | 0.9055 | 0.9774 | 0.0 | 0.9195 | 0.7271 | 0.8854 | 0.2475 | 0.9980 | 0.9221 | 0.0 | 0.7820 | 0.5782 | 0.7474 | 0.2415 | 0.9921 | 0.8969 |
| 0.616 | 1.71 | 140 | 0.3630 | 0.6382 | 0.7043 | 0.9141 | 0.9845 | 0.0 | 0.9106 | 0.7205 | 0.8929 | 0.4243 | 0.9975 | 0.9132 | 0.0 | 0.7515 | 0.6445 | 0.7799 | 0.3859 | 0.9920 | 0.9090 |
| 0.6891 | 1.95 | 160 | 0.3563 | 0.6249 | 0.6897 | 0.9049 | 0.9912 | 0.0 | 0.8575 | 0.5156 | 0.9541 | 0.5139 | 0.9955 | 0.9010 | 0.0 | 0.7908 | 0.4895 | 0.7447 | 0.4577 | 0.9906 | 0.8978 |
| 0.3915 | 2.2 | 180 | 0.3039 | 0.6420 | 0.7094 | 0.9122 | 0.9690 | 0.0 | 0.8829 | 0.7779 | 0.8809 | 0.4559 | 0.9990 | 0.9296 | 0.0 | 0.8211 | 0.5859 | 0.7445 | 0.4225 | 0.9901 | 0.9093 |
| 0.478 | 2.44 | 200 | 0.2957 | 0.6661 | 0.7380 | 0.9201 | 0.9884 | 0.0 | 0.8179 | 0.8009 | 0.8854 | 0.6765 | 0.9973 | 0.9150 | 0.0 | 0.7697 | 0.6348 | 0.7986 | 0.5532 | 0.9916 | 0.9186 |
| 0.2891 | 2.68 | 220 | 0.2687 | 0.6781 | 0.7439 | 0.9258 | 0.9890 | 0.0 | 0.9161 | 0.7389 | 0.8977 | 0.6776 | 0.9881 | 0.9253 | 0.0 | 0.8384 | 0.6330 | 0.7898 | 0.5726 | 0.9874 | 0.9238 |
| 0.3432 | 2.93 | 240 | 0.2629 | 0.6725 | 0.7379 | 0.9246 | 0.9752 | 0.0 | 0.9444 | 0.7407 | 0.8825 | 0.6246 | 0.9976 | 0.9308 | 0.0 | 0.8389 | 0.6136 | 0.7804 | 0.5506 | 0.9935 | 0.9224 |
| 0.4379 | 3.17 | 260 | 0.2528 | 0.6787 | 0.7360 | 0.9283 | 0.9837 | 0.0 | 0.9170 | 0.7116 | 0.9280 | 0.6139 | 0.9974 | 0.9422 | 0.0 | 0.8382 | 0.6218 | 0.7870 | 0.5687 | 0.9931 | 0.9255 |
| 0.2129 | 3.41 | 280 | 0.2250 | 0.6775 | 0.7401 | 0.9264 | 0.9911 | 0.0 | 0.9001 | 0.6653 | 0.9301 | 0.7006 | 0.9936 | 0.9330 | 0.0 | 0.8456 | 0.6021 | 0.7849 | 0.5853 | 0.9918 | 0.9237 |
| 0.1941 | 3.66 | 300 | 0.2191 | 0.6943 | 0.7621 | 0.9336 | 0.9770 | 0.0 | 0.9426 | 0.8365 | 0.8619 | 0.7193 | 0.9974 | 0.9377 | 0.0 | 0.8296 | 0.6934 | 0.8076 | 0.5981 | 0.9938 | 0.9325 |
| 0.3357 | 3.9 | 320 | 0.2315 | 0.6841 | 0.7582 | 0.9267 | 0.9782 | 0.0 | 0.8671 | 0.7910 | 0.8801 | 0.7948 | 0.9961 | 0.9322 | 0.0 | 0.8089 | 0.6298 | 0.8011 | 0.6240 | 0.9926 | 0.9261 |
| 0.2456 | 4.15 | 340 | 0.2462 | 0.6904 | 0.7586 | 0.9303 | 0.9895 | 0.0 | 0.9229 | 0.6965 | 0.8897 | 0.8146 | 0.9970 | 0.9339 | 0.0 | 0.8563 | 0.6091 | 0.7915 | 0.6476 | 0.9947 | 0.9284 |
| 0.3712 | 4.39 | 360 | 0.2366 | 0.6918 | 0.7696 | 0.9307 | 0.9848 | 0.0 | 0.8491 | 0.8504 | 0.8689 | 0.8389 | 0.9951 | 0.9514 | 0.0 | 0.8085 | 0.6517 | 0.8102 | 0.6272 | 0.9935 | 0.9311 |
| 0.4034 | 4.63 | 380 | 0.2442 | 0.6854 | 0.7691 | 0.9254 | 0.9799 | 0.0 | 0.9505 | 0.8457 | 0.7848 | 0.8289 | 0.9940 | 0.9361 | 0.0 | 0.8287 | 0.6650 | 0.7533 | 0.6215 | 0.9930 | 0.9248 |
| 0.1794 | 4.88 | 400 | 0.2162 | 0.6951 | 0.7555 | 0.9338 | 0.9831 | 0.0 | 0.8868 | 0.7703 | 0.9249 | 0.7271 | 0.9967 | 0.9449 | 0.0 | 0.8392 | 0.6450 | 0.8101 | 0.6320 | 0.9945 | 0.9324 |
| 0.3036 | 5.12 | 420 | 0.2356 | 0.6869 | 0.7514 | 0.9303 | 0.9889 | 0.0 | 0.9184 | 0.6697 | 0.9145 | 0.7701 | 0.9983 | 0.9289 | 0.0 | 0.8442 | 0.6064 | 0.8075 | 0.6280 | 0.9932 | 0.9277 |
| 0.1948 | 5.37 | 440 | 0.2138 | 0.6963 | 0.7661 | 0.9333 | 0.9801 | 0.0 | 0.8889 | 0.8187 | 0.8823 | 0.7944 | 0.9982 | 0.9443 | 0.0 | 0.8363 | 0.6453 | 0.8165 | 0.6381 | 0.9939 | 0.9330 |
| 0.5839 | 5.61 | 460 | 0.2248 | 0.6998 | 0.7699 | 0.9350 | 0.9913 | 0.0 | 0.9310 | 0.7673 | 0.8695 | 0.8340 | 0.9966 | 0.9330 | 0.0 | 0.8511 | 0.6645 | 0.8143 | 0.6420 | 0.9936 | 0.9338 |
| 0.2039 | 5.85 | 480 | 0.2311 | 0.6949 | 0.7695 | 0.9300 | 0.9801 | 0.0 | 0.8510 | 0.8609 | 0.8619 | 0.8362 | 0.9967 | 0.9417 | 0.0 | 0.8106 | 0.6365 | 0.8076 | 0.6740 | 0.9942 | 0.9304 |
| 0.1818 | 6.1 | 500 | 0.2297 | 0.7037 | 0.7794 | 0.9338 | 0.9788 | 0.0 | 0.9420 | 0.8915 | 0.8103 | 0.8374 | 0.9955 | 0.9417 | 0.0 | 0.8651 | 0.6649 | 0.7871 | 0.6730 | 0.9942 | 0.9338 |
| 0.1637 | 6.34 | 520 | 0.1984 | 0.7105 | 0.7757 | 0.9389 | 0.9817 | 0.0 | 0.9503 | 0.8314 | 0.8634 | 0.8062 | 0.9969 | 0.9409 | 0.0 | 0.8601 | 0.6899 | 0.8123 | 0.6756 | 0.9946 | 0.9380 |
| 0.2034 | 6.59 | 540 | 0.1970 | 0.7170 | 0.7771 | 0.9426 | 0.9875 | 0.0 | 0.8953 | 0.8225 | 0.9205 | 0.8200 | 0.9939 | 0.9478 | 0.0 | 0.8602 | 0.7085 | 0.8317 | 0.6781 | 0.9926 | 0.9418 |
| 0.2673 | 6.83 | 560 | 0.2055 | 0.7111 | 0.7800 | 0.9400 | 0.9907 | 0.0 | 0.9323 | 0.8022 | 0.8739 | 0.8651 | 0.9957 | 0.9366 | 0.0 | 0.8675 | 0.6941 | 0.8262 | 0.6591 | 0.9939 | 0.9392 |
| 0.2446 | 7.07 | 580 | 0.2391 | 0.6769 | 0.7454 | 0.9286 | 0.9829 | 0.0 | 0.9409 | 0.9085 | 0.8298 | 0.5581 | 0.9979 | 0.9531 | 0.0 | 0.8470 | 0.6501 | 0.7880 | 0.5054 | 0.9944 | 0.9274 |
| 0.1747 | 7.32 | 600 | 0.2450 | 0.6993 | 0.7642 | 0.9351 | 0.9872 | 0.0 | 0.9200 | 0.7115 | 0.9121 | 0.8223 | 0.9967 | 0.9458 | 0.0 | 0.8690 | 0.6249 | 0.8060 | 0.6552 | 0.9939 | 0.9335 |
| 0.1262 | 7.56 | 620 | 0.2595 | 0.6955 | 0.7652 | 0.9315 | 0.9878 | 0.0 | 0.9453 | 0.7199 | 0.8657 | 0.8406 | 0.9970 | 0.9399 | 0.0 | 0.8588 | 0.6152 | 0.7856 | 0.6745 | 0.9946 | 0.9298 |
| 0.1223 | 7.8 | 640 | 0.2334 | 0.7028 | 0.7704 | 0.9339 | 0.9861 | 0.0 | 0.9214 | 0.8069 | 0.8572 | 0.8234 | 0.9977 | 0.9527 | 0.0 | 0.8672 | 0.6354 | 0.7841 | 0.6854 | 0.9948 | 0.9334 |
| 0.0915 | 8.05 | 660 | 0.2561 | 0.6879 | 0.7660 | 0.9258 | 0.9905 | 0.0 | 0.8363 | 0.8988 | 0.8221 | 0.8176 | 0.9967 | 0.9483 | 0.0 | 0.8053 | 0.6239 | 0.7819 | 0.6615 | 0.9946 | 0.9266 |
| 0.1095 | 8.29 | 680 | 0.2018 | 0.7179 | 0.7813 | 0.9431 | 0.9851 | 0.0 | 0.9306 | 0.7844 | 0.9086 | 0.8650 | 0.9950 | 0.9425 | 0.0 | 0.8742 | 0.6973 | 0.8350 | 0.6831 | 0.9933 | 0.9421 |
| 0.196 | 8.54 | 700 | 0.2125 | 0.7115 | 0.7834 | 0.9392 | 0.9800 | 0.0 | 0.9376 | 0.8437 | 0.8523 | 0.8722 | 0.9979 | 0.9515 | 0.0 | 0.8737 | 0.6855 | 0.8046 | 0.6707 | 0.9947 | 0.9389 |
| 0.1548 | 8.78 | 720 | 0.1893 | 0.7261 | 0.7833 | 0.9467 | 0.9830 | 0.0 | 0.9370 | 0.8452 | 0.9035 | 0.8170 | 0.9973 | 0.9545 | 0.0 | 0.8776 | 0.7329 | 0.8299 | 0.6936 | 0.9944 | 0.9458 |
| 0.1376 | 9.02 | 740 | 0.2467 | 0.7101 | 0.7706 | 0.9394 | 0.9868 | 0.0 | 0.9268 | 0.7615 | 0.9067 | 0.8150 | 0.9977 | 0.9536 | 0.0 | 0.8685 | 0.6624 | 0.8077 | 0.6831 | 0.9952 | 0.9381 |
| 0.1569 | 9.27 | 760 | 0.2038 | 0.7184 | 0.7823 | 0.9432 | 0.9904 | 0.0 | 0.9064 | 0.8209 | 0.8961 | 0.8651 | 0.9976 | 0.9505 | 0.0 | 0.8672 | 0.7048 | 0.8265 | 0.6845 | 0.9952 | 0.9425 |
| 0.1633 | 9.51 | 780 | 0.1953 | 0.7213 | 0.7820 | 0.9443 | 0.9882 | 0.0 | 0.9093 | 0.8436 | 0.8986 | 0.8371 | 0.9975 | 0.9499 | 0.0 | 0.8690 | 0.7129 | 0.8304 | 0.6919 | 0.9950 | 0.9436 |
| 0.1501 | 9.76 | 800 | 0.2605 | 0.7013 | 0.7688 | 0.9352 | 0.9799 | 0.0 | 0.8636 | 0.8899 | 0.8816 | 0.7689 | 0.9975 | 0.9573 | 0.0 | 0.8129 | 0.6623 | 0.8220 | 0.6595 | 0.9950 | 0.9353 |
| 0.2376 | 10.0 | 820 | 0.2071 | 0.7186 | 0.7850 | 0.9438 | 0.9922 | 0.0 | 0.9306 | 0.8279 | 0.8795 | 0.8683 | 0.9962 | 0.9505 | 0.0 | 0.8731 | 0.7145 | 0.8282 | 0.6695 | 0.9947 | 0.9431 |
| 0.0856 | 10.24 | 840 | 0.2190 | 0.7113 | 0.7689 | 0.9410 | 0.9878 | 0.0 | 0.9119 | 0.8175 | 0.9127 | 0.7559 | 0.9964 | 0.9583 | 0.0 | 0.8737 | 0.6801 | 0.8152 | 0.6568 | 0.9948 | 0.9401 |
| 0.2257 | 10.49 | 860 | 0.2133 | 0.7124 | 0.7847 | 0.9402 | 0.9871 | 0.0 | 0.9328 | 0.8304 | 0.8554 | 0.8895 | 0.9980 | 0.9565 | 0.0 | 0.8557 | 0.7005 | 0.8108 | 0.6686 | 0.9948 | 0.9398 |
| 0.2022 | 10.73 | 880 | 0.2517 | 0.7030 | 0.7746 | 0.9355 | 0.9845 | 0.0 | 0.8451 | 0.8379 | 0.8945 | 0.8631 | 0.9973 | 0.9514 | 0.0 | 0.8109 | 0.6656 | 0.8229 | 0.6754 | 0.9949 | 0.9353 |
| 0.0923 | 10.98 | 900 | 0.2118 | 0.7122 | 0.7717 | 0.9416 | 0.9863 | 0.0 | 0.8904 | 0.8660 | 0.9057 | 0.7555 | 0.9979 | 0.9563 | 0.0 | 0.8418 | 0.7016 | 0.8320 | 0.6582 | 0.9952 | 0.9409 |
| 0.1639 | 11.22 | 920 | 0.1961 | 0.7244 | 0.7817 | 0.9471 | 0.9835 | 0.0 | 0.9354 | 0.8432 | 0.9118 | 0.8013 | 0.9967 | 0.9584 | 0.0 | 0.8649 | 0.7337 | 0.8397 | 0.6790 | 0.9951 | 0.9462 |
| 0.0676 | 11.46 | 940 | 0.2238 | 0.7135 | 0.7829 | 0.9421 | 0.9885 | 0.0 | 0.9482 | 0.7915 | 0.8765 | 0.8784 | 0.9973 | 0.9503 | 0.0 | 0.8653 | 0.7000 | 0.8282 | 0.6555 | 0.9952 | 0.9413 |
| 0.056 | 11.71 | 960 | 0.1908 | 0.7284 | 0.7848 | 0.9478 | 0.9862 | 0.0 | 0.9206 | 0.8434 | 0.9143 | 0.8319 | 0.9973 | 0.9583 | 0.0 | 0.8724 | 0.7308 | 0.8375 | 0.7048 | 0.9948 | 0.9470 |
| 0.1205 | 11.95 | 980 | 0.2000 | 0.7248 | 0.7914 | 0.9457 | 0.9844 | 0.0 | 0.9377 | 0.8256 | 0.8807 | 0.9132 | 0.9986 | 0.9581 | 0.0 | 0.8681 | 0.7249 | 0.8265 | 0.7016 | 0.9947 | 0.9450 |
| 0.0429 | 12.2 | 1000 | 0.1937 | 0.7285 | 0.7865 | 0.9476 | 0.9856 | 0.0 | 0.9298 | 0.8327 | 0.9064 | 0.8529 | 0.9982 | 0.9590 | 0.0 | 0.8687 | 0.7326 | 0.8335 | 0.7108 | 0.9952 | 0.9468 |
| 0.0992 | 12.44 | 1020 | 0.2081 | 0.7225 | 0.7858 | 0.9448 | 0.9863 | 0.0 | 0.9394 | 0.8478 | 0.8780 | 0.8522 | 0.9972 | 0.9576 | 0.0 | 0.8668 | 0.7191 | 0.8230 | 0.6955 | 0.9953 | 0.9441 |
| 0.089 | 12.68 | 1040 | 0.2044 | 0.7243 | 0.7812 | 0.9466 | 0.9832 | 0.0 | 0.9240 | 0.8403 | 0.9128 | 0.8095 | 0.9987 | 0.9561 | 0.0 | 0.8724 | 0.7274 | 0.8351 | 0.6844 | 0.9949 | 0.9457 |
| 0.1252 | 12.93 | 1060 | 0.2146 | 0.7138 | 0.7750 | 0.9417 | 0.9824 | 0.0 | 0.9125 | 0.8768 | 0.8895 | 0.7662 | 0.9973 | 0.9573 | 0.0 | 0.8521 | 0.6993 | 0.8267 | 0.6659 | 0.9950 | 0.9412 |
| 0.3201 | 13.17 | 1080 | 0.2169 | 0.7197 | 0.7849 | 0.9439 | 0.9796 | 0.0 | 0.9489 | 0.8613 | 0.8713 | 0.8352 | 0.9977 | 0.9580 | 0.0 | 0.8636 | 0.7225 | 0.8209 | 0.6774 | 0.9955 | 0.9434 |
| 0.101 | 13.41 | 1100 | 0.2023 | 0.7268 | 0.7848 | 0.9482 | 0.9871 | 0.0 | 0.9314 | 0.8350 | 0.9111 | 0.8315 | 0.9978 | 0.9587 | 0.0 | 0.8702 | 0.7416 | 0.8412 | 0.6806 | 0.9952 | 0.9473 |
| 0.0522 | 13.66 | 1120 | 0.2116 | 0.7207 | 0.7864 | 0.9441 | 0.9915 | 0.0 | 0.9083 | 0.8596 | 0.8771 | 0.8701 | 0.9983 | 0.9580 | 0.0 | 0.8691 | 0.7108 | 0.8236 | 0.6878 | 0.9957 | 0.9437 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
digiplay/CamelliaMIx_2.5D_diffusers | digiplay | 2024-03-06T16:46:20Z | 1,832 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-27T11:22:01Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/44219/camelliamix25d
File name: `camelliamix25D_v2.safetensors` |
digiplay/PlanetBumix_v1 | digiplay | 2024-03-06T16:30:45Z | 484 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-18T04:59:37Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/91651/orplanetbumix
Original author's demo image:

Sample image I made:
 |
nitinai/bert-finetuned-ner | nitinai | 2024-03-06T16:28:45Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T11:40:41Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Precision: 0.9343
- Recall: 0.9509
- F1: 0.9425
- Accuracy: 0.9866
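
The card does not include a usage example; a minimal sketch follows. Grouping subword predictions into entities via `aggregation_strategy` is an assumption about how the labels are meant to be consumed, and the example sentence is illustrative only.

```python
from transformers import pipeline

# Only the repo id comes from this card; the input text is illustrative.
ner = pipeline("token-classification", model="nitinai/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```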
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0729 | 1.0 | 1756 | 0.0689 | 0.9017 | 0.9293 | 0.9153 | 0.9811 |
| 0.0338 | 2.0 | 3512 | 0.0681 | 0.9278 | 0.9451 | 0.9364 | 0.9850 |
| 0.0213 | 3.0 | 5268 | 0.0622 | 0.9343 | 0.9509 | 0.9425 | 0.9866 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
azcastillo/codeparrot | azcastillo | 2024-03-06T16:21:14Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T16:21:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lihuchen/pearl_base | Lihuchen | 2024-03-06T16:14:00Z | 4,784 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"Phrase Representation",
"String Matching",
"Fuzzy Join",
"Entity Retrieval",
"transformers",
"en",
"arxiv:2401.10407",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-16T22:00:37Z | ---
license: apache-2.0
language:
- en
tags:
- Phrase Representation
- String Matching
- Fuzzy Join
- Entity Retrieval
- transformers
- sentence-transformers
---
## PEARL-base
[Learning High-Quality and General-Purpose Phrase Representations](https://arxiv.org/pdf/2401.10407.pdf). <br>
[Lihu Chen](https://chenlihu.com), [Gaël Varoquaux](https://gael-varoquaux.info/), [Fabian M. Suchanek](https://suchanek.name/).
<br> Accepted by EACL Findings 2024
PEARL-base is a lightweight string embedding model. It is the tool of choice for computing semantic similarity between strings,
producing high-quality embeddings for string matching, entity retrieval, entity clustering, and fuzzy joins.
<br>
It differs from typical sentence embedders because it incorporates phrase type information and morphological features,
allowing it to better capture variations in strings.
The model is a variant of [E5-base](https://huggingface.co/intfloat/e5-base-v2) finetuned on our constructed context-free [dataset](https://zenodo.org/records/10676475) to yield better representations
for phrases and strings. <br>
🤗 [PEARL-small](https://huggingface.co/Lihuchen/pearl_small) 🤗 [PEARL-base](https://huggingface.co/Lihuchen/pearl_base)
[PEARL Benchmark](https://huggingface.co/datasets/Lihuchen/pearl_benchmark) [PEARL Leaderboard](https://huggingface.co/spaces/Lihuchen/pearl_leaderboard)
<br>
| Model |Size|Avg| PPDB | PPDB filtered |Turney|BIRD|YAGO|UMLS|CoNLL|BC5CDR|AutoFJ|
|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| FastText |-| 40.3| 94.4 | 61.2 | 59.6 | 58.9 |16.9|14.5|3.0|0.2| 53.6|
| Sentence-BERT |110M|50.1| 94.6 | 66.8 | 50.4 | 62.6 | 21.6|23.6|25.5|48.4| 57.2|
| Phrase-BERT |110M|54.5| 96.8 | 68.7 | 57.2 | 68.8 |23.7|26.1|35.4| 59.5|66.9|
| E5-small |34M|57.0| 96.0| 56.8|55.9| 63.1|43.3| 42.0|27.6| 53.7|74.8|
|E5-base|110M| 61.1| 95.4|65.6|59.4|66.3| 47.3|44.0|32.0| 69.3|76.1|
|PEARL-small|34M| 62.5| 97.0|70.2|57.9|68.1| 48.1|44.5|42.4|59.3|75.2|
|PEARL-base|110M|64.8|97.3|72.2|59.7|72.6|50.7|45.8|39.3|69.4|77.1|
Cost comparison of FastText and PEARL. The estimated memory is calculated from the number of parameters (float16). Inference speed is reported in ms per 512 samples. The FastText model here is `crawl-300d-2M-subword.bin`.
| Model |Avg Score| Estimated Memory |Speed GPU | Speed CPU |
|-|-|-|-|-|
|FastText|40.3|1200MB|-|57ms|
|PEARL-small|62.5|68MB|42ms|446ms|
|PEARL-base|64.8|220MB|89ms|1394ms|
## Usage
### Sentence Transformers
PEARL is integrated with the Sentence Transformers library (thanks to [Tom Aarsen](https://huggingface.co/tomaarsen) for contributing the integration) and can be used like so:
```python
from sentence_transformers import SentenceTransformer, util
query_texts = ["The New York Times"]
doc_texts = ["NYTimes", "New York Post", "New York"]
input_texts = query_texts + doc_texts
model = SentenceTransformer("Lihuchen/pearl_base")
embeddings = model.encode(input_texts)
scores = util.cos_sim(embeddings[0], embeddings[1:]) * 100
print(scores.tolist())
# [[85.61601257324219, 73.65623474121094, 70.36174774169922]]
```
### Transformers
You can also run PEARL with plain `transformers`. Below is an entity-retrieval example that reuses the pooling code from E5.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Mean-pool the token embeddings, ignoring padding positions
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


def encode_text(model, input_texts):
    # Tokenize the input texts (uses the module-level `tokenizer` defined below)
    batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**batch_dict)
    embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
    return embeddings


query_texts = ["The New York Times"]
doc_texts = ["NYTimes", "New York Post", "New York"]
input_texts = query_texts + doc_texts

tokenizer = AutoTokenizer.from_pretrained('Lihuchen/pearl_base')
model = AutoModel.from_pretrained('Lihuchen/pearl_base')

# encode
embeddings = encode_text(model, input_texts)

# calculate cosine similarity via normalized dot products
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
# expected outputs
# [[85.61601257324219, 73.65624237060547, 70.36172485351562]]
```
## Training and Evaluation
Have a look at our code on [GitHub](https://github.com/tigerchen52/PEARL).
## Citation
If you find our work useful, please give us a citation:
```
@article{chen2024learning,
title={Learning High-Quality and General-Purpose Phrase Representations},
author={Chen, Lihu and Varoquaux, Ga{\"e}l and Suchanek, Fabian M},
journal={arXiv preprint arXiv:2401.10407},
year={2024}
}
``` |
han2lin/gpt2_med_ppl_0-30 | han2lin | 2024-03-06T16:06:31Z | 173 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T16:05:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
panos-span/Cartpole-v1 | panos-span | 2024-03-06T16:04:52Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T16:04:40Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
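
The course typically stores these agents as pickled PyTorch policies; a hedged evaluation sketch under that assumption is shown below. The `model.pt` filename and the policy's `act` method come from the Unit 4 template, not from this card, so verify them against the repo contents.

```python
import gymnasium as gym
import torch
from huggingface_hub import hf_hub_download

# Assumes this repo follows the Unit 4 template: a pickled PyTorch policy
# stored as model.pt, exposing act(state) -> (action, log_prob).
policy = torch.load(hf_hub_download("panos-span/Cartpole-v1", "model.pt"))

env = gym.make("CartPole-v1")
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = policy.act(state)
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```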
|
gonzalezrostani/my_awesome_wnut_model | gonzalezrostani | 2024-03-06T16:04:18Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T15:53:46Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2770
- Precision: 0.5528
- Recall: 0.3058
- F1: 0.3938
- Accuracy: 0.9408
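
A minimal sketch of tagging with this checkpoint without the `pipeline` helper; only the repo id comes from this card, and the input sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "gonzalezrostani/my_awesome_wnut_model"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

inputs = tok("Golden State Warriors play in San Francisco", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map the argmax label id of each token back to its string label
labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```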
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2915 | 0.5140 | 0.2558 | 0.3416 | 0.9384 |
| No log | 2.0 | 426 | 0.2770 | 0.5528 | 0.3058 | 0.3938 | 0.9408 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
peldrak/segformer-b5-ade-finetuned-grCoastline | peldrak | 2024-03-06T16:03:55Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b5-finetuned-ade-640-640",
"base_model:finetune:nvidia/segformer-b5-finetuned-ade-640-640",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-03-06T15:29:53Z | ---
license: other
base_model: nvidia/segformer-b5-finetuned-ade-640-640
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-ade-finetuned-grCoastline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-ade-finetuned-grCoastline
This model is a fine-tuned version of [nvidia/segformer-b5-finetuned-ade-640-640](https://huggingface.co/nvidia/segformer-b5-finetuned-ade-640-640) on the peldrak/grCoastline_512 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2332
- Mean Iou: 0.6951
- Mean Accuracy: 0.7738
- Overall Accuracy: 0.9385
- Accuracy Water: 0.9780
- Accuracy Whitewater: 0.0
- Accuracy Sediment: 0.8899
- Accuracy Other Natural Terrain: 0.7486
- Accuracy Vegetation: 0.9147
- Accuracy Development: 0.8882
- Accuracy Unknown: 0.9971
- Iou Water: 0.9630
- Iou Whitewater: 0.0
- Iou Sediment: 0.7578
- Iou Other Natural Terrain: 0.6446
- Iou Vegetation: 0.8528
- Iou Development: 0.6519
- Iou Unknown: 0.9957
- F1 Score: 0.9381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
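
A hedged reconstruction of these values as `TrainingArguments`; the output directory is an assumption, and the optimizer betas and epsilon match the transformers defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above; the actual
# training script is not published.
args = TrainingArguments(
    output_dir="segformer-b5-ade-finetuned-grCoastline",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```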
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|:--------:|
| 1.5035 | 0.24 | 20 | 1.3099 | 0.4529 | 0.5545 | 0.8166 | 0.7963 | 0.0007 | 0.7870 | 0.3478 | 0.8930 | 0.0604 | 0.9961 | 0.7606 | 0.0002 | 0.4736 | 0.2851 | 0.6104 | 0.0587 | 0.9819 | 0.8036 |
| 1.152 | 0.49 | 40 | 0.8933 | 0.4479 | 0.5546 | 0.8317 | 0.9700 | 0.0 | 0.9422 | 0.1145 | 0.8493 | 0.0126 | 0.9933 | 0.8446 | 0.0 | 0.5935 | 0.1113 | 0.5820 | 0.0126 | 0.9911 | 0.7930 |
| 1.3783 | 0.73 | 60 | 0.6157 | 0.5149 | 0.6084 | 0.8765 | 0.9728 | 0.0 | 0.9618 | 0.4069 | 0.9229 | 0.0020 | 0.9925 | 0.9109 | 0.0 | 0.5742 | 0.3812 | 0.7458 | 0.0020 | 0.9904 | 0.8560 |
| 0.8913 | 0.98 | 80 | 0.5418 | 0.5547 | 0.6332 | 0.8979 | 0.9783 | 0.0 | 0.9290 | 0.5524 | 0.9609 | 0.0173 | 0.9945 | 0.9338 | 0.0 | 0.6919 | 0.4900 | 0.7569 | 0.0173 | 0.9927 | 0.8800 |
| 0.9266 | 1.22 | 100 | 0.4282 | 0.5629 | 0.6421 | 0.9015 | 0.9816 | 0.0 | 0.9457 | 0.5864 | 0.9404 | 0.0447 | 0.9960 | 0.9308 | 0.0 | 0.6778 | 0.5128 | 0.7826 | 0.0443 | 0.9921 | 0.8855 |
| 0.6905 | 1.46 | 120 | 0.3734 | 0.6016 | 0.6758 | 0.9091 | 0.9885 | 0.0 | 0.8944 | 0.6737 | 0.9081 | 0.2712 | 0.9944 | 0.9338 | 0.0 | 0.6825 | 0.5550 | 0.7875 | 0.2598 | 0.9929 | 0.9022 |
| 0.7955 | 1.71 | 140 | 0.3400 | 0.6165 | 0.6850 | 0.9139 | 0.9705 | 0.0 | 0.8788 | 0.6373 | 0.9635 | 0.3481 | 0.9971 | 0.9335 | 0.0 | 0.7052 | 0.5782 | 0.7811 | 0.3233 | 0.9939 | 0.9077 |
| 0.5851 | 1.95 | 160 | 0.2810 | 0.6500 | 0.7179 | 0.9262 | 0.9749 | 0.0 | 0.9234 | 0.7170 | 0.9375 | 0.4775 | 0.9951 | 0.9487 | 0.0 | 0.7511 | 0.6320 | 0.8105 | 0.4145 | 0.9932 | 0.9230 |
| 0.4819 | 2.2 | 180 | 0.3615 | 0.5975 | 0.6868 | 0.8867 | 0.9779 | 0.0 | 0.9149 | 0.8533 | 0.6517 | 0.4151 | 0.9951 | 0.9454 | 0.0 | 0.7422 | 0.4986 | 0.6207 | 0.3815 | 0.9938 | 0.8866 |
| 0.8142 | 2.44 | 200 | 0.2637 | 0.6706 | 0.7485 | 0.9273 | 0.9840 | 0.0 | 0.8588 | 0.8254 | 0.8505 | 0.7243 | 0.9964 | 0.9556 | 0.0 | 0.7779 | 0.6342 | 0.7910 | 0.5411 | 0.9942 | 0.9276 |
| 0.396 | 2.68 | 220 | 0.2642 | 0.6508 | 0.7275 | 0.9186 | 0.9856 | 0.0 | 0.8634 | 0.7820 | 0.8452 | 0.6203 | 0.9958 | 0.9544 | 0.0 | 0.7546 | 0.5842 | 0.7684 | 0.4998 | 0.9943 | 0.9184 |
| 0.7178 | 2.93 | 240 | 0.2661 | 0.6598 | 0.7391 | 0.9222 | 0.9805 | 0.0 | 0.9402 | 0.7458 | 0.8377 | 0.6709 | 0.9988 | 0.9449 | 0.0 | 0.7474 | 0.5932 | 0.7888 | 0.5511 | 0.9934 | 0.9214 |
| 0.5699 | 3.17 | 260 | 0.2108 | 0.6815 | 0.7540 | 0.9326 | 0.9827 | 0.0 | 0.8490 | 0.7546 | 0.9139 | 0.7804 | 0.9975 | 0.9517 | 0.0 | 0.7857 | 0.6459 | 0.8094 | 0.5831 | 0.9947 | 0.9319 |
| 0.3768 | 3.41 | 280 | 0.2256 | 0.6891 | 0.7628 | 0.9363 | 0.9820 | 0.0 | 0.9426 | 0.7842 | 0.8824 | 0.7533 | 0.9952 | 0.9578 | 0.0 | 0.7728 | 0.6568 | 0.8343 | 0.6079 | 0.9941 | 0.9359 |
| 0.7524 | 3.66 | 300 | 0.2295 | 0.6758 | 0.7427 | 0.9314 | 0.9924 | 0.0 | 0.8485 | 0.7430 | 0.9262 | 0.6945 | 0.9944 | 0.9477 | 0.0 | 0.7486 | 0.6275 | 0.8293 | 0.5848 | 0.9929 | 0.9298 |
| 0.3174 | 3.9 | 320 | 0.2178 | 0.6780 | 0.7516 | 0.9319 | 0.9830 | 0.0 | 0.8744 | 0.7954 | 0.8833 | 0.7275 | 0.9979 | 0.9616 | 0.0 | 0.7345 | 0.6312 | 0.8332 | 0.5905 | 0.9953 | 0.9318 |
| 0.2191 | 4.15 | 340 | 0.2237 | 0.6714 | 0.7378 | 0.9311 | 0.9798 | 0.0 | 0.8692 | 0.8005 | 0.9113 | 0.6081 | 0.9954 | 0.9570 | 0.0 | 0.7618 | 0.6360 | 0.8232 | 0.5276 | 0.9944 | 0.9302 |
| 0.2526 | 4.39 | 360 | 0.2836 | 0.6616 | 0.7510 | 0.9192 | 0.9774 | 0.0 | 0.9351 | 0.7824 | 0.7882 | 0.7770 | 0.9973 | 0.9458 | 0.0 | 0.7342 | 0.5903 | 0.7671 | 0.5989 | 0.9951 | 0.9197 |
| 0.2428 | 4.63 | 380 | 0.2368 | 0.6873 | 0.7704 | 0.9316 | 0.9720 | 0.0 | 0.8632 | 0.8565 | 0.8365 | 0.8667 | 0.9981 | 0.9570 | 0.0 | 0.7836 | 0.6500 | 0.7979 | 0.6281 | 0.9948 | 0.9326 |
| 0.138 | 4.88 | 400 | 0.2237 | 0.6921 | 0.7655 | 0.9382 | 0.9765 | 0.0 | 0.8913 | 0.7857 | 0.9087 | 0.7983 | 0.9982 | 0.9618 | 0.0 | 0.7210 | 0.6680 | 0.8635 | 0.6355 | 0.9950 | 0.9382 |
| 0.2074 | 5.12 | 420 | 0.2075 | 0.6903 | 0.7617 | 0.9382 | 0.9855 | 0.0 | 0.8654 | 0.7844 | 0.9180 | 0.7821 | 0.9967 | 0.9614 | 0.0 | 0.7368 | 0.6624 | 0.8591 | 0.6175 | 0.9953 | 0.9377 |
| 0.4849 | 5.37 | 440 | 0.2016 | 0.6957 | 0.7713 | 0.9394 | 0.9873 | 0.0 | 0.8818 | 0.7709 | 0.9085 | 0.8538 | 0.9965 | 0.9497 | 0.0 | 0.7560 | 0.6707 | 0.8621 | 0.6362 | 0.9950 | 0.9390 |
| 0.2335 | 5.61 | 460 | 0.2293 | 0.6909 | 0.7638 | 0.9369 | 0.9848 | 0.0 | 0.8796 | 0.7226 | 0.9309 | 0.8326 | 0.9963 | 0.9568 | 0.0 | 0.7373 | 0.6366 | 0.8538 | 0.6566 | 0.9950 | 0.9358 |
| 0.2768 | 5.85 | 480 | 0.2453 | 0.6866 | 0.7644 | 0.9347 | 0.9764 | 0.0 | 0.9604 | 0.6923 | 0.9087 | 0.8168 | 0.9966 | 0.9562 | 0.0 | 0.7478 | 0.6367 | 0.8314 | 0.6390 | 0.9952 | 0.9335 |
| 0.1383 | 6.1 | 500 | 0.2322 | 0.6865 | 0.7570 | 0.9361 | 0.9547 | 0.0 | 0.8865 | 0.7609 | 0.9475 | 0.7510 | 0.9984 | 0.9443 | 0.0 | 0.7323 | 0.6723 | 0.8516 | 0.6097 | 0.9953 | 0.9354 |
| 0.3734 | 6.34 | 520 | 0.2335 | 0.6890 | 0.7657 | 0.9361 | 0.9877 | 0.0 | 0.8494 | 0.7954 | 0.8919 | 0.8377 | 0.9976 | 0.9586 | 0.0 | 0.7192 | 0.6598 | 0.8515 | 0.6386 | 0.9952 | 0.9360 |
| 0.131 | 6.59 | 540 | 0.2459 | 0.6849 | 0.7626 | 0.9362 | 0.9848 | 0.0 | 0.8292 | 0.7209 | 0.9426 | 0.8628 | 0.9981 | 0.9605 | 0.0 | 0.7203 | 0.6425 | 0.8589 | 0.6173 | 0.9951 | 0.9351 |
| 0.1874 | 6.83 | 560 | 0.2642 | 0.6761 | 0.7504 | 0.9308 | 0.9756 | 0.0 | 0.9512 | 0.7777 | 0.8681 | 0.6823 | 0.9975 | 0.9611 | 0.0 | 0.7349 | 0.6306 | 0.8235 | 0.5873 | 0.9954 | 0.9306 |
| 0.1282 | 7.07 | 580 | 0.2463 | 0.6883 | 0.7588 | 0.9380 | 0.9841 | 0.0 | 0.8800 | 0.7561 | 0.9309 | 0.7636 | 0.9970 | 0.9629 | 0.0 | 0.7370 | 0.6487 | 0.8647 | 0.6093 | 0.9954 | 0.9373 |
| 0.1173 | 7.32 | 600 | 0.2412 | 0.6898 | 0.7661 | 0.9374 | 0.9778 | 0.0 | 0.8615 | 0.7889 | 0.9097 | 0.8267 | 0.9981 | 0.9609 | 0.0 | 0.7282 | 0.6603 | 0.8610 | 0.6225 | 0.9954 | 0.9374 |
| 0.1012 | 7.56 | 620 | 0.2550 | 0.6869 | 0.7623 | 0.9353 | 0.9837 | 0.0 | 0.8689 | 0.7977 | 0.8907 | 0.7985 | 0.9968 | 0.9623 | 0.0 | 0.7376 | 0.6455 | 0.8458 | 0.6219 | 0.9951 | 0.9353 |
| 0.2309 | 7.8 | 640 | 0.2549 | 0.6920 | 0.7595 | 0.9386 | 0.9840 | 0.0 | 0.8390 | 0.7803 | 0.9350 | 0.7801 | 0.9979 | 0.9619 | 0.0 | 0.7304 | 0.6571 | 0.8609 | 0.6381 | 0.9956 | 0.9378 |
| 0.1986 | 8.05 | 660 | 0.2270 | 0.6979 | 0.7668 | 0.9405 | 0.9781 | 0.0 | 0.9063 | 0.7757 | 0.9203 | 0.7895 | 0.9980 | 0.9625 | 0.0 | 0.7509 | 0.6640 | 0.8620 | 0.6503 | 0.9955 | 0.9400 |
| 0.5972 | 8.29 | 680 | 0.2307 | 0.6997 | 0.7714 | 0.9404 | 0.9743 | 0.0 | 0.8955 | 0.7639 | 0.9248 | 0.8432 | 0.9983 | 0.9610 | 0.0 | 0.7591 | 0.6594 | 0.8578 | 0.6650 | 0.9953 | 0.9399 |
| 0.1218 | 8.54 | 700 | 0.2914 | 0.6734 | 0.7541 | 0.9223 | 0.9829 | 0.0 | 0.8578 | 0.8144 | 0.8106 | 0.8157 | 0.9974 | 0.9621 | 0.0 | 0.7434 | 0.5823 | 0.7703 | 0.6606 | 0.9950 | 0.9234 |
| 0.1051 | 8.78 | 720 | 0.2212 | 0.7029 | 0.7755 | 0.9422 | 0.9849 | 0.0 | 0.9237 | 0.7433 | 0.9202 | 0.8598 | 0.9969 | 0.9628 | 0.0 | 0.7848 | 0.6672 | 0.8522 | 0.6582 | 0.9953 | 0.9413 |
| 0.1204 | 9.02 | 740 | 0.2667 | 0.6915 | 0.7636 | 0.9368 | 0.9875 | 0.0 | 0.9098 | 0.6347 | 0.9566 | 0.8601 | 0.9967 | 0.9605 | 0.0 | 0.7803 | 0.6052 | 0.8308 | 0.6681 | 0.9955 | 0.9341 |
| 0.2844 | 9.27 | 760 | 0.2356 | 0.6913 | 0.7635 | 0.9352 | 0.9815 | 0.0 | 0.8728 | 0.7781 | 0.8930 | 0.8201 | 0.9990 | 0.9614 | 0.0 | 0.7466 | 0.6327 | 0.8340 | 0.6686 | 0.9956 | 0.9349 |
| 0.1134 | 9.51 | 780 | 0.2499 | 0.6941 | 0.7668 | 0.9372 | 0.9848 | 0.0 | 0.8818 | 0.7850 | 0.8968 | 0.8228 | 0.9968 | 0.9640 | 0.0 | 0.7495 | 0.6446 | 0.8432 | 0.6619 | 0.9958 | 0.9370 |
| 0.6034 | 9.76 | 800 | 0.2462 | 0.6932 | 0.7635 | 0.9371 | 0.9826 | 0.0 | 0.9008 | 0.7640 | 0.9119 | 0.7903 | 0.9949 | 0.9619 | 0.0 | 0.7692 | 0.6418 | 0.8382 | 0.6471 | 0.9944 | 0.9365 |
| 0.088 | 10.0 | 820 | 0.2285 | 0.7030 | 0.7754 | 0.9417 | 0.9862 | 0.0 | 0.9104 | 0.7763 | 0.9045 | 0.8533 | 0.9969 | 0.9643 | 0.0 | 0.7718 | 0.6665 | 0.8539 | 0.6686 | 0.9956 | 0.9412 |
| 0.0965 | 10.24 | 840 | 0.2332 | 0.6951 | 0.7738 | 0.9385 | 0.9780 | 0.0 | 0.8899 | 0.7486 | 0.9147 | 0.8882 | 0.9971 | 0.9630 | 0.0 | 0.7578 | 0.6446 | 0.8528 | 0.6519 | 0.9957 | 0.9381 |
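For reference, the per-class IoU and mean IoU values reported above are derived from a confusion matrix over pixel labels. Below is a minimal sketch of that computation; the class names mirror the table's columns, but the array shapes and random inputs are illustrative assumptions, not this card's actual evaluation code.

```python
import numpy as np

CLASSES = ["water", "whitewater", "sediment", "other_natural_terrain",
           "vegetation", "development", "unknown"]

def confusion_matrix(pred, label, num_classes):
    # pred, label: integer arrays of the same shape with values in [0, num_classes)
    mask = (label >= 0) & (label < num_classes)
    idx = num_classes * label[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def iou_per_class(cm):
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp  # predicted as class c but labeled otherwise
    fn = cm.sum(axis=1) - tp  # labeled class c but predicted otherwise
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)

# Random stand-ins for real segmentation maps, purely for illustration
rng = np.random.default_rng(0)
label = rng.integers(0, len(CLASSES), size=(512, 512))
pred = rng.integers(0, len(CLASSES), size=(512, 512))
cm = confusion_matrix(pred, label, len(CLASSES))
ious = iou_per_class(cm)
print({c: round(float(i), 4) for c, i in zip(CLASSES, ious)})
print("mean IoU:", round(float(np.nanmean(ious)), 4))
```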
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ankursinghbisht/Reinforce-CartPole | ankursinghbisht | 2024-03-06T16:01:04Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T16:00:55Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
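As a rough illustration, a Reinforce agent like this one is typically evaluated by rolling out its policy greedily; the MLP architecture and hidden size below are common course defaults, not necessarily this checkpoint's exact configuration.

```python
import gymnasium as gym
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Small MLP mapping CartPole's 4-dim state to action probabilities;
    the hidden size is an assumption, not read from this checkpoint."""
    def __init__(self, state_dim=4, hidden=16, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions), nn.Softmax(dim=-1),
        )

    def forward(self, x):
        return self.net(x)

def evaluate(policy, episodes=10):
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(episodes):
        state, _ = env.reset()
        done, total = False, 0.0
        while not done:
            with torch.no_grad():
                probs = policy(torch.as_tensor(state, dtype=torch.float32))
            action = int(torch.argmax(probs))  # greedy action at evaluation time
            state, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    env.close()
    return sum(returns) / len(returns)

policy = Policy()  # load trained weights here, e.g. policy.load_state_dict(torch.load(...))
print("mean return over 10 episodes:", evaluate(policy))
```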
|
han2lin/gpt2_med_ppl_70-100 | han2lin | 2024-03-06T15:50:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T15:49:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
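In the meantime, a minimal sketch of loading this checkpoint with π€ transformers follows; the repo id comes from this card, while the prompt and generation settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "han2lin/gpt2_med_ppl_70-100"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt; sampling settings are assumptions, not tuned values.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```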
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ezre/LLAMA2-QTSUMM-T2Q-SFT-0306 | Ezre | 2024-03-06T15:50:18Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-03-06T15:50:15Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training; a loading sketch that reconstructs it follows the list:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
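A minimal sketch reconstructing this config for loading the adapter with PEFT. The base model id is an assumption (the card does not name it), and the 4-bit fields in the list above are inactive since `load_in_4bit` is False.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization config listed above (8-bit path).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the card does not name the base model
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Ezre/LLAMA2-QTSUMM-T2Q-SFT-0306")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```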
### Framework versions
- PEFT 0.5.0
|
itsliupeng/llama2_70b_mmlu | itsliupeng | 2024-03-06T15:50:00Z | 211 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"dataset:itsliupeng/mmlu_recall",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-26T11:35:07Z | ---
language:
- en
- zh
license: apache-2.0
datasets:
- itsliupeng/mmlu_recall
model-index:
- name: llama2_70b_mmlu
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/llama2_70b_mmlu
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/llama2_70b_mmlu
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/llama2_70b_mmlu
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.15
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/llama2_70b_mmlu
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/llama2_70b_mmlu
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/llama2_70b_mmlu
name: Open LLM Leaderboard
---
We use the [mmlu_recall dataset](https://huggingface.co/datasets/itsliupeng/mmlu_recall) to continually train the [Llama-2-70b-hf model](https://huggingface.co/meta-llama/Llama-2-70b),
aiming to improve MMLU performance while leaving the other benchmark metrics unaffected.
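A minimal sketch of loading this model for evaluation; the dtype and device placement below are assumptions sized for a multi-GPU node, not settings taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "itsliupeng/llama2_70b_mmlu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 70B model needs several GPUs (or offloading); device_map="auto" shards it.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative probe only; the MMLU numbers below come from the Open LLM
# Leaderboard's 5-shot harness, not from this snippet.
prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```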
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_itsliupeng__llama2_70b_mmlu)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.24|
|AI2 Reasoning Challenge (25-Shot)|65.61|
|HellaSwag (10-Shot) |87.37|
|MMLU (5-Shot) |71.89|
|TruthfulQA (0-shot) |49.15|
|Winogrande (5-shot) |82.40|
|GSM8k (5-shot) |52.99|
|
han2lin/gpt2_med_s15e22_ppl_0-30 | han2lin | 2024-03-06T15:49:30Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T15:49:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
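In the meantime, a minimal sketch using the high-level `pipeline` API; the repo id comes from this card, while the prompt and sampling settings are illustrative assumptions.

```python
from transformers import pipeline

# Repo id from this card; generation settings are illustrative assumptions.
generator = pipeline("text-generation", model="han2lin/gpt2_med_s15e22_ppl_0-30")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```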
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|