modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, length 1-4.05k) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
mssongit/Qwen2-7b-orpo-1 | mssongit | 2024-07-01T02:31:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T02:28:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
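No starter code is provided in the card. As a minimal, unofficial sketch (the model id comes from this repo; the chat-style usage and sampling settings are assumptions, not something the card specifies), the checkpoint can be loaded with the standard 🤗 transformers API:

```python
# Minimal sketch, not from the model author: generic transformers usage for a
# Qwen2-based chat checkpoint. Chat usage and sampling settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mssongit/Qwen2-7b-orpo-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize what ORPO fine-tuning does in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```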
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
caitq-huggingface/llama3-8b-instruct-seqlen-4096-bs-2 | caitq-huggingface | 2024-07-01T02:30:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T02:28:34Z | Entry not found |
mjkenney/my-gemma-2b-arc-finetuned-model | mjkenney | 2024-07-01T02:32:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T02:30:10Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sliceofham/firstmodel | sliceofham | 2024-07-01T02:35:41Z | 0 | 0 | null | [
"license:llama3",
"region:us"
] | null | 2024-07-01T02:35:41Z | ---
license: llama3
---
|
bigstorm/dolphin-2.9.2-qwen2-72b-7.0bpw-8hb-exl2 | bigstorm | 2024-07-01T03:17:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"conversational",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:Qwen/Qwen2-72B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-07-01T02:38:39Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-110B/blob/main/LICENSE
base_model: Qwen/Qwen2-72B
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# BigStorm - ExLLamaV2 (Exl2) Quantization
- 7.0 bpw target
- 8 head bits
Enjoy! Raise an issue if you'd like other BPW levels.
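The card gives no loading instructions for the quant. As a rough sketch only, exl2 checkpoints are normally consumed through the ExLlamaV2 Python API along the lines of exllamav2's example scripts; the local path, prompt, and sampling settings below are assumptions, and the API may differ between exllamav2 versions.

```python
# Rough sketch (adapted from exllamav2's example scripts, not from this card):
# loading an exl2 quant and generating. Paths, prompt and settings are assumptions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/dolphin-2.9.2-qwen2-72b-7.0bpw-8hb-exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache is allocated while the weights load
model.load_autosplit(cache)               # split the 72B weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, 200))
```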
**Base Model Card Follows:**
---
# Dolphin 2.9.2 Qwen2 72B 🐬
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9.2:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xH100 node
This model is based on Qwen2-72B and is governed by the [tongyi-qianwen license](LICENSE).
The base model has 128k context; the full-weight fine-tuning was done with an 8k sequence length.
This model was trained with full-weight fine-tuning (FFT) on parameters selected by [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py), using the ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
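The axolotl config below sets `chat_template: chatml`, so the tokenizer shipped with the repo should reproduce this format via `apply_chat_template`. The snippet below is only an illustrative sketch; nothing beyond the template itself comes from the card.

```python
# Illustrative sketch: rendering the ChatML prompt above with the repo's tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigstorm/dolphin-2.9.2-qwen2-72b-7.0bpw-8hb-exl2")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "{prompt}"},
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n" turn
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```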
Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Qwen's tongyi-qianwen license. We grant permission for any use, including commercial, that is in accordance with that license. Dolphin was trained on data generated by GPT-4, among other models.
## Evals

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: Qwen/Qwen2-72B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
# load_in_8bit: true
# load_in_4bit: false
# strict: false
datasets:
- path: /workspace/datasets/dolphin-2.9.2/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/SystemChat_sharegpt.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# mlp.down_proj layers
- model.layers.62.mlp.down_proj
- model.layers.63.mlp.down_proj
- model.layers.66.mlp.down_proj
- model.layers.65.mlp.down_proj
- model.layers.64.mlp.down_proj
- model.layers.67.mlp.down_proj
- model.layers.68.mlp.down_proj
- model.layers.60.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.69.mlp.down_proj
- model.layers.61.mlp.down_proj
- model.layers.59.mlp.down_proj
- model.layers.70.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.76.mlp.down_proj
- model.layers.72.mlp.down_proj
- model.layers.77.mlp.down_proj
- model.layers.71.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.58.mlp.down_proj
- model.layers.75.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.56.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.26.mlp.down_proj
- model.layers.33.mlp.down_proj
- model.layers.34.mlp.down_proj
- model.layers.57.mlp.down_proj
- model.layers.27.mlp.down_proj
- model.layers.25.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.73.mlp.down_proj
- model.layers.24.mlp.down_proj
- model.layers.78.mlp.down_proj
- model.layers.74.mlp.down_proj
- model.layers.54.mlp.down_proj
# mlp.gate_proj layers
- model.layers.78.mlp.gate_proj
- model.layers.77.mlp.gate_proj
- model.layers.76.mlp.gate_proj
- model.layers.79.mlp.gate_proj
- model.layers.75.mlp.gate_proj
- model.layers.74.mlp.gate_proj
- model.layers.73.mlp.gate_proj
- model.layers.70.mlp.gate_proj
- model.layers.72.mlp.gate_proj
- model.layers.71.mlp.gate_proj
- model.layers.69.mlp.gate_proj
- model.layers.54.mlp.gate_proj
- model.layers.68.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.55.mlp.gate_proj
- model.layers.53.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.67.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.64.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.66.mlp.gate_proj
- model.layers.52.mlp.gate_proj
- model.layers.40.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.43.mlp.gate_proj
- model.layers.65.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.59.mlp.gate_proj
# mlp.up_proj layers
- model.layers.69.mlp.up_proj
- model.layers.70.mlp.up_proj
- model.layers.71.mlp.up_proj
- model.layers.68.mlp.up_proj
- model.layers.67.mlp.up_proj
- model.layers.66.mlp.up_proj
- model.layers.46.mlp.up_proj
- model.layers.63.mlp.up_proj
- model.layers.72.mlp.up_proj
- model.layers.64.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.65.mlp.up_proj
- model.layers.73.mlp.up_proj
- model.layers.47.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.74.mlp.up_proj
- model.layers.75.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.76.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.61.mlp.up_proj
- model.layers.40.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.31.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.32.mlp.up_proj
- model.layers.41.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.58.mlp.up_proj
# self_attn.k_proj layers
- model.layers.79.self_attn.k_proj
- model.layers.36.self_attn.k_proj
- model.layers.35.self_attn.k_proj
- model.layers.74.self_attn.k_proj
- model.layers.34.self_attn.k_proj
- model.layers.78.self_attn.k_proj
- model.layers.77.self_attn.k_proj
- model.layers.37.self_attn.k_proj
- model.layers.39.self_attn.k_proj
- model.layers.41.self_attn.k_proj
- model.layers.38.self_attn.k_proj
- model.layers.33.self_attn.k_proj
- model.layers.69.self_attn.k_proj
- model.layers.42.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.70.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.63.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.68.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.66.self_attn.k_proj
- model.layers.31.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.65.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.64.self_attn.k_proj
- model.layers.44.self_attn.k_proj
- model.layers.27.self_attn.k_proj
- model.layers.75.self_attn.k_proj
- model.layers.40.self_attn.k_proj
- model.layers.26.self_attn.k_proj
- model.layers.61.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.14.self_attn.o_proj
- model.layers.39.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.15.self_attn.o_proj
- model.layers.69.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.42.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.46.self_attn.o_proj
- model.layers.52.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.41.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.49.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.47.self_attn.o_proj
- model.layers.53.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.43.self_attn.o_proj
- model.layers.44.self_attn.o_proj
- model.layers.45.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.10.self_attn.o_proj
- model.layers.3.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.5.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.0.self_attn.q_proj
- model.layers.6.self_attn.q_proj
- model.layers.8.self_attn.q_proj
- model.layers.7.self_attn.q_proj
- model.layers.9.self_attn.q_proj
- model.layers.10.self_attn.q_proj
- model.layers.12.self_attn.q_proj
- model.layers.19.self_attn.q_proj
- model.layers.18.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.11.self_attn.q_proj
- model.layers.15.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.16.self_attn.q_proj
- model.layers.68.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.13.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.53.self_attn.q_proj
- model.layers.64.self_attn.q_proj
- model.layers.66.self_attn.q_proj
- model.layers.62.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.50.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.15.self_attn.v_proj
- model.layers.16.self_attn.v_proj
- model.layers.23.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.26.self_attn.v_proj
- model.layers.27.self_attn.v_proj
- model.layers.28.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.33.self_attn.v_proj
- model.layers.34.self_attn.v_proj
- model.layers.35.self_attn.v_proj
- model.layers.36.self_attn.v_proj
- model.layers.37.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.41.self_attn.v_proj
- model.layers.42.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.53.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.59.self_attn.v_proj
- model.layers.61.self_attn.v_proj
- model.layers.63.self_attn.v_proj
- model.layers.64.self_attn.v_proj
- model.layers.65.self_attn.v_proj
- model.layers.66.self_attn.v_proj
- model.layers.69.self_attn.v_proj
- model.layers.74.self_attn.v_proj
- model.layers.75.self_attn.v_proj
- model.layers.76.self_attn.v_proj
- model.layers.72.self_attn.v_proj
chat_template: chatml
dataset_prepared_path: qwen2-72b-data
val_set_size: 0.01
output_dir: qwen2-72b
sequence_len: 8192 # supports up to 8192
sample_packing: true
pad_to_sequence_len: true
# adapter: lora
# lora_model_dir:
# lora_r: 32
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_linear: true
# lora_fan_in_fan_out:
wandb_project: qwen2-72b
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 4
save_total_limit: 2
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|endoftext|>"
eos_token: "<|im_end|>"
```
|
wiliie/bert-finetuned-ner | wiliie | 2024-07-01T03:01:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:bert-base-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-01T02:41:30Z | ---
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9373342175066313
- name: Recall
type: recall
value: 0.9515314708852238
- name: F1
type: f1
value: 0.9443794888926006
- name: Accuracy
type: accuracy
value: 0.9862836286572084
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9373
- Recall: 0.9515
- F1: 0.9444
- Accuracy: 0.9863
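The card does not include an inference snippet; a minimal sketch using the 🤗 transformers token-classification pipeline (the aggregation strategy and example sentence are assumptions) looks like this:

```python
# Minimal sketch, not provided by the model author: NER inference with the
# transformers pipeline. Aggregation strategy and example text are assumptions.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="wiliie/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
# -> list of dicts with entity_group, score, word, start, end
```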
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0771 | 1.0 | 1756 | 0.0615 | 0.9157 | 0.9384 | 0.9269 | 0.9828 |
| 0.0354 | 2.0 | 3512 | 0.0589 | 0.9292 | 0.9470 | 0.9380 | 0.9857 |
| 0.0219 | 3.0 | 5268 | 0.0602 | 0.9373 | 0.9515 | 0.9444 | 0.9863 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
sgdkn/model_id | sgdkn | 2024-07-01T02:42:24Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-01T02:41:55Z | Entry not found |
y1xing/llama-3-8b-Instruct-bnb-4bit-learn-marking-synthetic-data | y1xing | 2024-07-01T02:45:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T02:45:50Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** y1xing
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
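No loading snippet is included; assuming the repo follows the usual Unsloth layout, a minimal sketch with `FastLanguageModel` (sequence length, dtype, and prompt are assumptions) would be:

```python
# Sketch only: typical Unsloth loading pattern for a 4-bit Llama-3 fine-tune.
# max_seq_length, dtype and the prompt are assumptions; none of this comes from the card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="y1xing/llama-3-8b-Instruct-bnb-4bit-learn-marking-synthetic-data",
    max_seq_length=2048,
    dtype=None,          # auto-detect
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer(["Mark this answer out of 10: ..."], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inputs, max_new_tokens=64)))
```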
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cukmanish/best_model.h5 | cukmanish | 2024-07-01T02:47:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:47:30Z | Entry not found |
lvjianjin/pokemon-lora | lvjianjin | 2024-07-01T02:48:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:48:09Z | Entry not found |
xuxinrui/result | xuxinrui | 2024-07-01T02:48:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T02:48:51Z | Entry not found |
houbw/llama3_8b_bnb_4bit_ruozhiba_method_10 | houbw | 2024-07-01T02:53:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T02:53:18Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MoRa2001/vit-base-patch16-224-in21k-finetuned-lora-food101 | MoRa2001 | 2024-07-01T03:03:52Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-07-01T02:55:36Z | Entry not found |
bumick/Llama3-appsealing-8b-Instruct | bumick | 2024-07-01T03:00:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-01T02:57:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kudod/xlm-roberta-large-finetuned-ner-vlsp2021-3090-1July-1 | Kudod | 2024-07-01T04:29:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-01T02:59:45Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large-finetuned-ner-vlsp2021-3090-1July-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-ner-vlsp2021-3090-1July-1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1332
- eval_DATETIME: {'precision': 0.8748768472906404, 'recall': 0.8862275449101796, 'f1': 0.8805156172533465, 'number': 1002}
- eval_ADDRESS: {'precision': 0.7837837837837838, 'recall': 1.0, 'f1': 0.8787878787878788, 'number': 29}
- eval_PERSON: {'precision': 0.9496365524402908, 'recall': 0.9631384939441812, 'f1': 0.9563398692810458, 'number': 1899}
- eval_PERSONTYPE: {'precision': 0.7142857142857143, 'recall': 0.7602339181286549, 'f1': 0.7365439093484419, 'number': 684}
- eval_PHONENUMBER: {'precision': 1.0, 'recall': 0.8888888888888888, 'f1': 0.9411764705882353, 'number': 9}
- eval_MISCELLANEOUS: {'precision': 0.5521472392638037, 'recall': 0.5660377358490566, 'f1': 0.5590062111801242, 'number': 159}
- eval_EMAIL: {'precision': 1.0, 'recall': 0.9803921568627451, 'f1': 0.99009900990099, 'number': 51}
- eval_LOCATION: {'precision': 0.8478731074260994, 'recall': 0.9039200614911607, 'f1': 0.875, 'number': 1301}
- eval_IP: {'precision': 0.9090909090909091, 'recall': 0.9090909090909091, 'f1': 0.9090909090909091, 'number': 11}
- eval_URL: {'precision': 0.5789473684210527, 'recall': 0.7333333333333333, 'f1': 0.6470588235294117, 'number': 15}
- eval_PRODUCT: {'precision': 0.7018739352640545, 'recall': 0.6592, 'f1': 0.6798679867986799, 'number': 625}
- eval_overall_precision: 0.8469
- eval_overall_recall: 0.8683
- eval_overall_f1: 0.8575
- eval_overall_accuracy: 0.9793
- eval_runtime: 38.9411
- eval_samples_per_second: 64.919
- eval_steps_per_second: 16.23
- epoch: 7.0
- step: 22841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
davidyu2023/Qwen-Qwen1.5-0.5B-1719803002 | davidyu2023 | 2024-07-01T03:03:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-07-01T03:03:22Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
dbaranchuk/check | dbaranchuk | 2024-07-01T04:06:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mobilenet_v2",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-01T03:04:37Z | Entry not found |
apwic/summarization-base-2 | apwic | 2024-07-01T06:27:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-01T03:05:30Z | ---
language:
- id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-base-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-base-2
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4862
- Rouge1: 0.3867
- Rouge2: 0.0
- Rougel: 0.3833
- Rougelsum: 0.386
- Gen Len: 1.0
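No usage example is given; a minimal sketch with the 🤗 summarization pipeline (the Indonesian placeholder text and generation lengths are assumptions) would be:

```python
# Minimal sketch, not from the model author: summarization with the transformers
# pipeline. Example text, max_length and min_length are assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="apwic/summarization-base-2")

article = "Teks berita berbahasa Indonesia yang ingin diringkas ..."  # placeholder input
print(summarizer(article, max_length=64, min_length=8, do_sample=False))
```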
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.6352 | 1.0 | 3573 | 0.4614 | 0.3871 | 0.0 | 0.3846 | 0.3884 | 1.0 |
| 0.4361 | 2.0 | 7146 | 0.4357 | 0.3574 | 0.0 | 0.3543 | 0.3544 | 1.0 |
| 0.3391 | 3.0 | 10719 | 0.4479 | 0.3973 | 0.0 | 0.3975 | 0.4009 | 1.0 |
| 0.2686 | 4.0 | 14292 | 0.4639 | 0.4113 | 0.0 | 0.4102 | 0.4115 | 1.0 |
| 0.2221 | 5.0 | 17865 | 0.4862 | 0.3867 | 0.0 | 0.3833 | 0.386 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
MariaGbrl/openai-gpt2-large-fine-tuned | MariaGbrl | 2024-07-01T11:03:20Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:openai-community/gpt2-large",
"license:mit",
"region:us"
] | null | 2024-07-01T03:07:11Z | ---
base_model: openai-community/gpt2-large
datasets:
- generator
library_name: peft
license: mit
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: openai-gpt2-large-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai-gpt2-large-fine-tuned
This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the generator dataset.
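The card lists PEFT among its framework versions but gives no loading code; a minimal sketch using `AutoPeftModelForCausalLM`, which resolves the base GPT-2 Large weights from the adapter config and applies the LoRA adapter (prompt and generation settings are assumptions), would be:

```python
# Minimal sketch, not from the model author: loading the PEFT adapter on top of
# gpt2-large. Prompt and generation settings are assumptions.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("MariaGbrl/openai-gpt2-large-fine-tuned")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-large")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```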
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.3.1
- Datasets 2.16.1
- Tokenizers 0.15.2 |
tatsuMura/furnituredatasets | tatsuMura | 2024-07-03T01:26:40Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-07-01T03:07:51Z | ---
license: cc-by-nc-4.0
---
|
Sanchirmaa/shuuh_model | Sanchirmaa | 2024-07-01T03:09:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T03:08:45Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Sanchirmaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
davidyu2023/Qwen-Qwen1.5-1.8B-1719803429 | davidyu2023 | 2024-07-01T03:10:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-07-01T03:10:30Z | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Charles1210/FocusedLLama | Charles1210 | 2024-07-01T03:11:33Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:11:33Z | Entry not found |
didev007/qwen1.5-llm | didev007 | 2024-07-01T03:13:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:13:17Z | Entry not found |
NoNameFactory/llama-3-8b-it-4bit-ContdPT_4_10 | NoNameFactory | 2024-07-01T03:18:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T03:15:37Z | ---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** NoNameFactory
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
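Since the card does not include an inference snippet, below is a minimal sketch of loading this checkpoint with Unsloth's `FastLanguageModel`. It assumes the repository can be loaded directly with Unsloth (as is typical for checkpoints saved with it); the `max_seq_length` value and the prompt are placeholders, not taken from the original training setup.

```python
from unsloth import FastLanguageModel

# Load the 4-bit model directly from this repository (max_seq_length is an assumed value).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NoNameFactory/llama-3-8b-it-4bit-ContdPT_4_10",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

# Placeholder prompt; requires a CUDA GPU for the 4-bit weights.
inputs = tokenizer(["Continue this paragraph about continued pretraining:"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```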
|
davidyu2023/google-gemma-2b-1719803739 | davidyu2023 | 2024-07-01T03:16:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | 2024-07-01T03:15:39Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
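In the absence of an official snippet, here is a minimal sketch that loads this repository as a PEFT adapter on top of the `google/gemma-2b` base model listed above. This assumes the repo contains a standard PEFT adapter, which the metadata suggests but the card does not confirm.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "davidyu2023/google-gemma-2b-1719803739")

# Placeholder prompt for a quick generation test.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```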
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
yitzshapiro/Qwen2-7B-Instruct-90s-Commercials-v2 | yitzshapiro | 2024-07-01T03:18:42Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:Qwen/Qwen2-7B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T03:16:53Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: Qwen/Qwen2-7B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
JuanCena/Shondo | JuanCena | 2024-07-01T03:25:20Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T03:19:42Z | ---
license: openrail
---
|
Beyond08/Introduction | Beyond08 | 2024-07-01T03:21:12Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2024-07-01T03:21:12Z | ---
license: unknown
---
|
statking/gemma9bit_rec_lora_kalora_test_fold0_classweight_1024_full | statking | 2024-07-01T03:21:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:21:54Z | Entry not found |
jasonk19/essay-scoring-5-epoch-batch-16-lr-1e-5-dropout-01 | jasonk19 | 2024-07-01T03:23:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:23:54Z | Entry not found |
tacmatic/yolov10-finetuned | tacmatic | 2024-07-01T03:24:58Z | 0 | 0 | ultralytics | [
"ultralytics",
"safetensors",
"object-detection",
"computer-vision",
"yolov10",
"dataset:detection-datasets/coco",
"arxiv:2405.14458",
"license:agpl-3.0",
"region:us"
] | object-detection | 2024-07-01T03:24:56Z | ---
license: agpl-3.0
library_name: ultralytics
tags:
- object-detection
- computer-vision
- yolov10
datasets:
- detection-datasets/coco
repo_url: https://github.com/THU-MIG/yolov10
inference: false
---
### Model Description
[YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1)
- arXiv: https://arxiv.org/abs/2405.14458v1
- github: https://github.com/THU-MIG/yolov10
### Installation
```
pip install git+https://github.com/THU-MIG/yolov10.git
```
### Training and validation
```python
from ultralytics import YOLOv10
model = YOLOv10.from_pretrained('jameslahm/yolov10n')
# Training
model.train(...)
# after training, one can push to the hub
model.push_to_hub("your-hf-username/yolov10-finetuned")
# Validation
model.val(...)
```
### Inference
Here's an end-to-end example showcasing inference on a cats image:
```python
from ultralytics import YOLOv10
model = YOLOv10.from_pretrained('jameslahm/yolov10n')
source = 'http://images.cocodataset.org/val2017/000000039769.jpg'
model.predict(source=source, save=True)
```
which shows:

### BibTeX Entry and Citation Info
```
@article{wang2024yolov10,
title={YOLOv10: Real-Time End-to-End Object Detection},
author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang},
journal={arXiv preprint arXiv:2405.14458},
year={2024}
}
``` |
ndsolo/llama3-8b-cosmic-fusion-dynamics-lora | ndsolo | 2024-07-01T03:26:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T03:26:28Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ndsolo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Haary/TinyLlama-1.1B-Adapter-Unsloth | Haary | 2024-07-01T03:31:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T03:31:06Z | ---
base_model: unsloth/tinyllama-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Haary
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
akshatxv/LCS2GPTNeo | akshatxv | 2024-07-01T03:31:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:31:44Z | Entry not found |
jefson08/GPT2Khasi | jefson08 | 2024-07-01T03:38:22Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T03:37:08Z | ---
license: mit
---
|
Yharaoj12/Ok | Yharaoj12 | 2024-07-01T03:43:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:43:23Z | Entry not found |
yjwon/zephyr_sft_dpo_beta1e-1_epoch1 | yjwon | 2024-07-01T03:47:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T03:44:13Z | Entry not found |
hiepdaoquang704/IU_style | hiepdaoquang704 | 2024-07-01T03:57:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:52:19Z | Entry not found |
statking/gemma9bit_rec_lora_kalora_test_fold0_classweight_2048_full | statking | 2024-07-01T03:52:26Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T03:52:26Z | Entry not found |
kP0pR3bel/Momo-All-Round | kP0pR3bel | 2024-07-01T03:54:16Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T03:52:39Z | ---
license: openrail
---
|
mjoys/jinpan-13B | mjoys | 2024-07-02T07:40:17Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T03:55:00Z | ---
license: apache-2.0
---
# jinpan-13B
**ๆตๅคงไบบๅทฅๆบ่ฝ็ ็ฉถๆ+ๆธ่ฑก็งๆ๏ผ2023ๅนด8ๆ21ๆฅ่ๅๅๅธๅ็ดไบ้่้ถๅฎ็่ฏญ่จๅคงๆจกๅ**
* ใ้็ฃๅคงๆจกๅใๆฏๆตๆฑๅคงๅญฆๅๆธ่ฑก็งๆ่ๅๅๅธ็ไธไธช่ชไธป็ ๅ็ๅ็ด้่็่ฏญ่จๅคงๆจกๅ๏ผ็ฎๅๆจกๅ่งๆจก7Bใ13B๏ผๅฏ่ฟไธๆญฅๆฉๅฑใ่ฎญ็ป็ๆฐๆฎ้ๅ็ดไบ้ถๅฎ้่ๆนๅ๏ผๆถต็ไบ้่ไนฆ็ฑใ่ฎบๆ๏ผ้่็ฅ่ฏๅพ่ฐฑใ้่ๅฏน่ฏๆๆฌ็ญๅค็งๆฐๆฎๆบ
* ใ้็ฃๅคงๆจกๅใ็็ฎๆ ๆฏไธบ้่ๆบๆๆไพ้ซๆใๆบ่ฝใๅฏไฟก่ต็่ฏญ่จๆๅก๏ผๅ
ๆฌ้่็ฅ่ฏ้ฎ็ญใ้่ๆๆฌ็ๆใ้่็ฅ่ฏๆจ็ๅๆ็ญๅค็งๅบ็จๅบๆฏใ
* ใๆ ธๅฟ็นๅพใ่งฃๅณ้็จๅคงๆจกๅ็ผบไน็ฒพๅๅฎ้่งฃๅณไธๅก้ฎ้ข็่ฝๅ๏ผๅจ้่้ถๅฎ้ขๅๅฎ็ฐ็ฒพ่ฐ้้
ใไธไธ็ฅ่ฏๆณจๅ
ฅใๅค้จ็ฅ่ฏๅบๅๅ๏ผไปฅๅ่งฃๅณๆจกๅ่พๅบๅ
ๅฎนๅๆจกๅๅๆฐใ่ฎญ็ปๆฐๆฎๅๅคๆบ็ฅ่ฏ็ๅฏ่งฃ้ๆงๆบฏๆบใๅๆถๅ
ทๆๅคงๆจกๅๅบ็ก่ฝๅๅ้่้ขๅๆณๅ่ฝๅ๏ผๆจกๅไฝ็งฏๅฐ๏ผๅๆฐ้้ไธญ๏ผๅไธช้่ไผไธๆ้็ฎๅๅฏไปฅๆฟ่ฝฝ๏ผๅฏไปฅ็งๆๅ้จ็ฝฒ๏ผๆปก่ถณๆฐๆฎๅฎๅ
จๅ่ง่ฆๆฑ๏ผๅๆถๆจกๅ่ฝ็ปๅๅ็งๆฌๅฐๆฐๆฎไปฅๅ้ๆๆ็ดข็ปไปถไปฅๅขๅผบๆจกๅ่พๅบ่ฝๅใ
## Quickstart
### ๐ค Hugging Face Transformers
Here is a code snippet showing how to use the chat model with `transformers`:
```python
import os
DEVICE_ID = "0"
os.environ['CUDA_VISIBLE_DEVICES'] = DEVICE_ID
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "mjoys/jinpan-13B"
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "ๆไนๅ็ไฟก็จๅก"
messages = [
{"role": "system", "content": "You are a helpful assistant. ไฝ ๆฏไธไธชไนไบๅฉไบบ็ๅฉๆใ"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
|
j9uv1n0/cl | j9uv1n0 | 2024-07-01T03:59:10Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T03:58:44Z | ---
license: openrail
---
|
yjwon/zephyr_sft_dpo_beta1e-1_epoch2 | yjwon | 2024-07-01T04:03:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T03:59:49Z | Entry not found |
AUSWEIS/AUSWEIS | AUSWEIS | 2024-07-01T04:03:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:03:36Z | Entry not found |
caitq-huggingface/llama3-8b-instruct-seqlen-4096-bs-1-v22 | caitq-huggingface | 2024-07-01T04:06:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T04:05:02Z | Entry not found |
TomEijkelenkamp/renaissance-cogvlm-contrast | TomEijkelenkamp | 2024-07-01T04:06:48Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:06:48Z | Entry not found |
mintjoa/trained-sd3 | mintjoa | 2024-07-01T04:09:59Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:09:59Z | Entry not found |
AN64M/BaldiRVCmodels | AN64M | 2024-07-01T04:15:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:11:00Z | Entry not found |
j9uv1n0/minzy | j9uv1n0 | 2024-07-01T04:14:58Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T04:13:36Z | ---
license: openrail
---
|
habulaj/1136714145 | habulaj | 2024-07-01T04:13:47Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:13:41Z | Entry not found |
Kelcin/test_model | Kelcin | 2024-07-01T04:18:33Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:18:33Z | Entry not found |
imagepipeline/choke | imagepipeline | 2024-07-01T04:18:46Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-07-01T04:18:42Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## choke
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - choke
[](https://imagepipeline.io/models/choke?id=f025dc63-d60b-4617-a2d4-7ce15786e92c/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "f025dc63-d60b-4617-a2d4-7ce15786e92c",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### ๐ Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
amyc20230713/merged16bit_model2 | amyc20230713 | 2024-07-01T06:50:57Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T04:20:33Z | Entry not found |
amyc20230713/lora_model2 | amyc20230713 | 2024-07-01T06:37:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"region:us"
] | null | 2024-07-01T04:20:53Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
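As a starting point, the standard PEFT loading pattern can be used — a minimal sketch assuming this repo holds a LoRA adapter for the `unsloth/llama-3-8b-bnb-4bit` base model listed in the metadata (the 4-bit base requires `bitsandbytes` to be installed; the prompt is a placeholder).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the adapter from this repository to its 4-bit base model (per the card metadata).
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
model = PeftModel.from_pretrained(base, "amyc20230713/lora_model2")

# Placeholder prompt for a quick generation test.
inputs = tokenizer("Summarize the benefits of parameter-efficient fine-tuning:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```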
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
sungju12/my_awesome_billsum_model | sungju12 | 2024-07-01T04:23:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:23:46Z | Entry not found |
to100mak/lora_model | to100mak | 2024-07-01T04:26:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T04:26:03Z | ---
base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** to100mak
- **License:** apache-2.0
- **Finetuned from model :** beomi/Llama-3-Open-Ko-8B-Instruct-preview
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
habulaj/80729218 | habulaj | 2024-07-01T04:26:52Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:26:47Z | Entry not found |
ZeZanZiet/processor_image_captioning | ZeZanZiet | 2024-07-01T04:27:40Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T04:27:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bajiyo/w2v2bert-lm-studiorecs | Bajiyo | 2024-07-01T11:17:24Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T04:29:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/11446089196 | habulaj | 2024-07-01T04:29:49Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:29:47Z | Entry not found |
Alifnfa/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 | Alifnfa | 2024-07-01T04:31:47Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:31:47Z | Entry not found |
avnishkanungo/distilhubert-finetuned-gtzan | avnishkanungo | 2024-07-01T05:48:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-07-01T04:35:42Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4577
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9562 | 1.0 | 113 | 1.8362 | 0.5 |
| 1.1877 | 2.0 | 226 | 1.2579 | 0.62 |
| 1.0263 | 3.0 | 339 | 1.0316 | 0.69 |
| 0.6373 | 4.0 | 452 | 0.7494 | 0.84 |
| 0.5875 | 5.0 | 565 | 0.6581 | 0.85 |
| 0.428 | 6.0 | 678 | 0.5088 | 0.89 |
| 0.3152 | 7.0 | 791 | 0.4619 | 0.86 |
| 0.1577 | 8.0 | 904 | 0.4274 | 0.88 |
| 0.2456 | 9.0 | 1017 | 0.4739 | 0.88 |
| 0.0905 | 10.0 | 1130 | 0.4577 | 0.87 |
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
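A quick way to try the model locally is the `transformers` audio-classification pipeline — a minimal sketch; the audio path is a placeholder, and the clip is assumed to roughly match GTZAN's ~30-second excerpt format.

```python
from transformers import pipeline

# Load this fine-tuned checkpoint as a music-genre classifier.
classifier = pipeline("audio-classification", model="avnishkanungo/distilhubert-finetuned-gtzan")

# Placeholder path: any mono audio clip (GTZAN excerpts are ~30 s).
predictions = classifier("path/to/music_clip.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```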
|
yjwon/zephyr_sft_dpo_beta1e-2_epoch3 | yjwon | 2024-07-01T04:43:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T04:39:56Z | Entry not found |
Dev372/HarshDev-whisper-small-English_4000_new | Dev372 | 2024-07-01T11:27:53Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-01T04:41:52Z | Entry not found |
houbw/llama38b_ruozhiba_1 | houbw | 2024-07-01T04:42:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T04:42:16Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hieuman/GenC-Mistral | Hieuman | 2024-07-01T04:42:18Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T04:42:18Z | ---
license: apache-2.0
---
|
basitkanjoo/Sidhu | basitkanjoo | 2024-07-01T04:42:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:42:43Z | Entry not found |
yjwon/zephyr_sft_dpo_beta1e-2_epoch4 | yjwon | 2024-07-01T04:56:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T04:43:57Z | Entry not found |
GraydientPlatformAPI/loras-june30 | GraydientPlatformAPI | 2024-07-01T05:29:56Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:50:03Z | Entry not found |
Dimeltech/Tenezis-8B-Instruct | Dimeltech | 2024-07-01T09:49:04Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"fr",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-01T04:50:56Z | ---
license: apache-2.0
language:
- fr
- en
metrics:
- accuracy
pipeline_tag: text-generation
--- |
habulaj/668544122 | habulaj | 2024-07-01T04:53:32Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:53:26Z | Entry not found |
habulaj/54526591 | habulaj | 2024-07-01T04:56:33Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T04:56:25Z | Entry not found |
priiiiiii/whisper-large-hindi-finetuned | priiiiiii | 2024-07-01T05:00:52Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T05:00:52Z | Entry not found |
tomreichel/repair-tokenizer | tomreichel | 2024-07-01T05:01:29Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T05:01:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
v0dkapapi/attempt5 | v0dkapapi | 2024-07-01T05:49:56Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-07-01T05:03:13Z | ---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: attempt5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# attempt5
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 150
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.3.0+cu121
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Huy227/adapter_v7 | Huy227 | 2024-07-01T05:05:24Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:google/gemma-2-9b-it",
"license:other",
"region:us"
] | null | 2024-07-01T05:05:01Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: google/gemma-2-9b-it
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on the identity, the sft_final_data and the chat_sharegpt datasets.
It achieves the following results on the evaluation set:
- Loss: 1.2501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6945 | 2.0060 | 500 | 1.1457 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
idiotDeveloper/vts_to_text_1.0 | idiotDeveloper | 2024-07-01T05:16:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:jwanZZANG/vts_sample_data_1.0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-01T05:05:08Z | ---
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- jwanZZANG/vts_sample_data_1.0
model-index:
- name: jwanZZANG/vts_to_text_1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jwanZZANG/vts_to_text_1.0
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the jwanZZANG/vts_sample_data_1.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
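No usage snippet is provided in the card; here is a minimal sketch using the `transformers` ASR pipeline. The audio path is a placeholder, and since this is a Whisper-small fine-tune on Korean data, the clip is assumed to be Korean speech.

```python
from transformers import pipeline

# Load the fine-tuned Whisper-small checkpoint for speech-to-text.
asr = pipeline("automatic-speech-recognition", model="idiotDeveloper/vts_to_text_1.0")

# Placeholder path to a Korean speech clip; longer audio can be split with chunking.
result = asr("path/to/korean_speech.wav", chunk_length_s=30)
print(result["text"])
```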
|
habulaj/267229272785 | habulaj | 2024-07-01T05:05:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T05:05:51Z | Entry not found |
Alifnfa/model | Alifnfa | 2024-07-01T05:07:45Z | 0 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-07-01T05:06:06Z | Entry not found |
rishab12023/t5-small-finetuned-xsum | rishab12023 | 2024-07-01T09:40:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-01T05:07:46Z | Entry not found |
anupam69/mistral_instruct_greenbonds | anupam69 | 2024-07-01T05:09:03Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T05:08:41Z | Entry not found |
BaurRustem/llava-1.5-7b-hf-test | BaurRustem | 2024-07-01T05:09:28Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T05:09:28Z | Entry not found |
BaurRustem/llava-1.5-7b-hf-ft-mix-vsft | BaurRustem | 2024-07-01T15:53:15Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:llava-hf/llava-1.5-7b-hf",
"region:us"
] | null | 2024-07-01T05:09:31Z | ---
base_model: llava-hf/llava-1.5-7b-hf
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llava-1.5-7b-hf-ft-mix-vsft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava-1.5-7b-hf-ft-mix-vsft
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
kwkwkwkwpark/pegasus-samsum | kwkwkwkwpark | 2024-07-01T05:09:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T05:09:35Z | Entry not found |
achoi1107/dummy | achoi1107 | 2024-07-01T05:12:19Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T05:10:34Z | # Dummy model
This is a dummy model |
jiyi/hyact-qwen | jiyi | 2024-07-01T06:16:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T05:11:19Z | ---
license: apache-2.0
---
|
kodoqmc/XTTS-v2_PeterDrury | kodoqmc | 2024-07-01T07:20:30Z | 0 | 1 | coqui | [
"coqui",
"text-to-speech",
"license:other",
"region:us"
] | text-to-speech | 2024-07-01T05:14:33Z | ---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
widget:
- text: "Once when I was six years old I saw a magnificent picture"
---
# โTTS_v2 - Peter Drury Fine-Tuned Model
This repository hosts a fine-tuned version of the ⓍTTS model, trained on 2.3 minutes of unique voice lines from Peter Drury. The voice lines were sourced from his podcast with JOE on YouTube, which can be found here:
[Peter Drury RANKS His Best Commentary Moments & Reveals Commentary Secrets! MESSI WIN WORLD CUP!](https://www.youtube.com/watch?v=ibT6PINpyaw&t)

Listen to a sample of the โTTS_v2 - Peter Drury Fine-Tuned Model:
<audio controls>
<source src="https://huggingface.co/kodoqmc/XTTS-v2_PeterDrury/resolve/main/fromtts.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
Here's a Peter Drury voice line clip from the training data:
<audio controls>
<source src="https://huggingface.co/kodoqmc/XTTS-v2_PeterDrury/resolve/main/reference.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
## Features
- ๐๏ธ **Voice Cloning**: Realistic voice cloning with just a short audio clip.
- ๐ **Multi-Lingual Support**: Generates speech in 17 different languages while maintaining Peter Drury's voice.
- ๐ **Emotion & Style Transfer**: Captures the emotional tone and style of the original voice.
- ๐ **Cross-Language Cloning**: Maintains the unique voice characteristics across different languages.
- ๐ง **High-Quality Audio**: Outputs at a 24kHz sampling rate for clear and high-fidelity audio.
## Supported Languages
The model supports the following 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).
## Usage in Roll Cage
๐ค๐ฌ Boost your AI experience with this Ollama add-on! Enjoy real-time audio ๐๏ธ and text ๐ chats, LaTeX rendering ๐, agent automations โ๏ธ, workflows ๐, text-to-image ๐โก๏ธ๐ผ๏ธ, image-to-text ๐ผ๏ธโก๏ธ๐ค, image-to-video ๐ผ๏ธโก๏ธ๐ฅ transformations. Fine-tune text ๐, voice ๐ฃ๏ธ, and image ๐ผ๏ธ gens. Includes Windows macro controls ๐ฅ๏ธ and DuckDuckGo search.
[ollama_agent_roll_cage (OARC)](https://github.com/Leoleojames1/ollama_agent_roll_cage) is a completely local Python & CMD toolset add-on for the Ollama command line interface. The OARC toolset automates the creation of agents, giving the user more control over the likely output. It provides SYSTEM prompt templates for each ./Modelfile, allowing users to design and deploy custom agents quickly. Users can select which local model file is used in agent construction with the desired system prompt.
## CoquiTTS and Resources
- ๐ธ๐ฌ **CoquiTTS**: [Coqui TTS on GitHub](https://github.com/coqui-ai/TTS)
- ๐ **Documentation**: [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
- ๐ฉโ๐ป **Questions**: [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
- ๐ฏ **Community**: [Discord](https://discord.gg/5eXr5seRrv)
## License
This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). Read more about the origin story of CPML [here](https://coqui.ai/blog/tts/cpml).
## Contact
Join our ๐ธCommunity on [Discord](https://discord.gg/fBC58unbKE) and follow us on [Twitter](https://twitter.com/coqui_ai). For inquiries, email us at [email protected].
Using ๐ธTTS API:
```python
from TTS.api import TTS
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # pick GPU when available
tts = TTS(model_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/",
          config_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/config.json",
          progress_bar=False).to(device)
# generate speech by cloning a voice using default settings
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
file_path="output.wav",
speaker_wav="/path/to/target/speaker.wav",
language="en")
```
Using ๐ธTTS Command line:
```console
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
--text "Bugรผn okula gitmek istemiyorum." \
--speaker_wav /path/to/target/speaker.wav \
--language_idx tr \
--use_cuda true
```
Using the model directly:
```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()
outputs = model.synthesize(
"It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
config,
speaker_wav="/data/TTS-public/_refclips/3.wav",
gpt_cond_len=3,
language="en",
)
```
|
yjwon/zephyr_sft_dpo_beta5e-2_epoch1 | yjwon | 2024-07-01T05:22:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T05:18:22Z | Entry not found |
yoko119/distilbert-base-uncased-finetuned-emotion | yoko119 | 2024-07-01T05:30:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-01T05:25:50Z | Entry not found |
yjwon/zephyr_sft_dpo_beta5e-2_epoch2 | yjwon | 2024-07-01T05:31:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T05:27:01Z | Entry not found |
erwannd/llava-1.5-7b-finetune-cord-1 | erwannd | 2024-07-01T05:52:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T05:28:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
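A hedged starting point (an assumption, since the card does not document the loading path) could be:

```python
# Sketch only: assumes this repo holds full LLaVA-1.5 weights; adjust if it is an adapter-only repo.
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

repo_id = "erwannd/llava-1.5-7b-finetune-cord-1"
processor = AutoProcessor.from_pretrained(repo_id)
model = LlavaForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)
```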
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kakaou/test | kakaou | 2024-07-01T05:35:31Z | 0 | 0 | null | [
"ko",
"region:us"
] | null | 2024-07-01T05:31:55Z | ---
language:
- ko
--- |
houbw/llama38b_ruozhiba_2 | houbw | 2024-07-01T05:32:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T05:32:21Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
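A minimal inference sketch with Unsloth follows; whether this repo holds merged weights or only a LoRA adapter is an assumption here, so adjust the model name if loading fails:

```python
# Sketch only: assumes the weights in this repo load directly with Unsloth's FastLanguageModel.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="houbw/llama38b_ruozhiba_2",  # this repo
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```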
|
Ketansomewhere/cifar10_conditional_diffusion1 | Ketansomewhere | 2024-07-01T17:39:38Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"Class conditioned Diffusion",
"CIFAR10 Diffusion",
"en",
"license:mit",
"region:us"
] | null | 2024-07-01T05:34:37Z | ---
license: mit
language:
- en
library_name: diffusers
tags:
- Class conditioned Diffusion
- CIFAR10 Diffusion
---
This repository provides a custom pipeline for a class-conditioned diffusion model. For the training script, pipeline, tutorial notebook, and sampling code, please check my GitHub repo: https://github.com/KetanMann/Class_Conditioned_Diffusion_Training_Script
Below are the class-conditional diffusion pipeline and the sampling code.
<div align="center">
<img src="grid_images.gif" alt="Class Conditioned Diffusion GIF">
</div>
First, install Diffusers from source:
```bash
!pip install git+https://github.com/huggingface/diffusers
```
Then log in to your Hugging Face account:
```python
from huggingface_hub import notebook_login
notebook_login()
```
Finally, for sampling and model testing, run the following code:
```python
from diffusers import UNet2DModel, DDPMScheduler
from diffusers.utils.torch_utils import randn_tensor
from diffusers.pipelines.pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from huggingface_hub import hf_hub_download
import torch
import os
from PIL import Image
import matplotlib.pyplot as plt
from typing import List, Optional, Tuple, Union
class DDPMPipelinenew(DiffusionPipeline):
def __init__(self, unet, scheduler, num_classes: int):
super().__init__()
self.register_modules(unet=unet, scheduler=scheduler)
self.num_classes = num_classes
self._device = unet.device # Ensure the pipeline knows the device
@torch.no_grad()
def __call__(
self,
batch_size: int = 64,
class_labels: Optional[torch.Tensor] = None,
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
num_inference_steps: int = 1000,
output_type: Optional[str] = "pil",
return_dict: bool = True,
) -> Union[ImagePipelineOutput, Tuple]:
# Ensure class_labels is on the same device as the model
class_labels = class_labels.to(self._device)
if class_labels.ndim == 0:
class_labels = class_labels.unsqueeze(0).expand(batch_size)
else:
class_labels = class_labels.expand(batch_size)
# Sample gaussian noise to begin loop
if isinstance(self.unet.config.sample_size, int):
image_shape = (
batch_size,
self.unet.config.in_channels,
self.unet.config.sample_size,
self.unet.config.sample_size,
)
else:
image_shape = (batch_size, self.unet.config.in_channels, *self.unet.config.sample_size)
image = randn_tensor(image_shape, generator=generator, device=self._device)
# Set step values
self.scheduler.set_timesteps(num_inference_steps)
for t in self.progress_bar(self.scheduler.timesteps):
# Ensure the class labels are correctly broadcast to match the input tensor shape
model_output = self.unet(image, t, class_labels).sample
image = self.scheduler.step(model_output, t, image, generator=generator).prev_sample
image = (image / 2 + 0.5).clamp(0, 1)
image = image.cpu().permute(0, 2, 3, 1).numpy()
if output_type == "pil":
image = self.numpy_to_pil(image)
if not return_dict:
return (image,)
return ImagePipelineOutput(images=image)
def to(self, device: torch.device):
self._device = device
self.unet.to(device)
return self
def load_pipeline(repo_id, num_classes, device):
unet = UNet2DModel.from_pretrained(repo_id, subfolder="unet").to(device)
scheduler = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler")
pipeline = DDPMPipelinenew(unet=unet, scheduler=scheduler, num_classes=num_classes)
return pipeline.to(device) # Move the entire pipeline to the device
def save_images_locally(images, save_dir, epoch, class_label):
os.makedirs(save_dir, exist_ok=True)
for i, image in enumerate(images):
image_path = os.path.join(save_dir, f"image_epoch{epoch}_class{class_label}_idx{i}.png")
image.save(image_path)
def generate_images(pipeline, class_label, batch_size, num_inference_steps, save_dir, epoch):
generator = torch.Generator(device=pipeline._device).manual_seed(0)
class_labels = torch.tensor([class_label] * batch_size).to(pipeline._device)
images = pipeline(
generator=generator,
batch_size=batch_size,
num_inference_steps=num_inference_steps,
class_labels=class_labels,
output_type="pil",
).images
save_images_locally(images, save_dir, epoch, class_label)
return images
def create_image_grid(images, grid_size, save_path):
assert len(images) == grid_size ** 2, "Number of images must be equal to grid_size squared"
width, height = images[0].size
grid_img = Image.new('RGB', (grid_size * width, grid_size * height))
for i, image in enumerate(images):
x = i % grid_size * width
y = i // grid_size * height
grid_img.paste(image, (x, y))
grid_img.save(save_path)
return grid_img
if __name__ == "__main__":
repo_id = "Ketansomewhere/cifar10_conditional_diffusion1"
num_classes = 10 # Adjust to your number of classes
batch_size = 64
num_inference_steps = 1000 # Can be as low as 50 for faster generation
save_dir = "generated_images"
epoch = 0
grid_size = 8 # 8x8 grid
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipeline = load_pipeline(repo_id, num_classes, device)
for class_label in range(num_classes):
images = generate_images(pipeline, class_label, batch_size, num_inference_steps, save_dir, epoch)
# Create and save the grid image
grid_img_path = os.path.join(save_dir, f"grid_image_class{class_label}.png")
grid_img = create_image_grid(images, grid_size, grid_img_path)
# Plot the grid image
plt.figure(figsize=(10, 10))
plt.imshow(grid_img)
plt.axis('off')
plt.title(f'Class {class_label}')
plt.savefig(os.path.join(save_dir, f"grid_image_class{class_label}.png"))
plt.show()
```
Also, check the notebook *testing.ipynb* for the above implementation. |
jkmeng/data_labeling_demo | jkmeng | 2024-07-01T05:36:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T05:36:12Z | ---
license: apache-2.0
---
|
yjwon/zephyr_sft_dpo_beta5e-2_epoch3 | yjwon | 2024-07-01T05:41:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T05:36:41Z | Entry not found |
taehyunzzz/switch-base-32-samsum | taehyunzzz | 2024-07-01T11:10:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"switch_transformers",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/switch-base-32",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-01T05:37:03Z | ---
license: apache-2.0
base_model: google/switch-base-32
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: switch-base-32-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 48.5521
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch-base-32-samsum
This model is a fine-tuned version of [google/switch-base-32](https://huggingface.co/google/switch-base-32) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3830
- Rouge1: 48.5521
- Rouge2: 25.5283
- Rougel: 40.8665
- Rougelsum: 44.9575
- Gen Len: 16.9144
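As a hedged usage sketch (the generation settings below are illustrative assumptions, not the evaluation configuration), dialogue summarization could be run as:

```python
# Sketch only: loads the fine-tuned Switch Transformer through the summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="taehyunzzz/switch-base-32-samsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :)"
)
print(summarizer(dialogue, max_length=32, min_length=5)[0]["summary_text"])
```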
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 13.4215 | 0.1086 | 100 | 11.0472 | 20.1278 | 5.2521 | 17.5417 | 19.1564 | 18.5807 |
| 2.8392 | 0.2172 | 200 | 2.1007 | 38.3594 | 16.281 | 32.2365 | 35.4802 | 16.5599 |
| 2.4215 | 0.3257 | 300 | 1.7960 | 42.1238 | 19.355 | 35.3645 | 39.2556 | 16.2677 |
| 2.122 | 0.4343 | 400 | 1.6754 | 43.7744 | 20.6979 | 36.6416 | 40.6431 | 17.3716 |
| 2.0046 | 0.5429 | 500 | 1.5964 | 44.1887 | 21.1957 | 36.8047 | 40.8905 | 16.7249 |
| 1.9988 | 0.6515 | 600 | 1.5513 | 45.6737 | 21.9662 | 38.0672 | 42.3237 | 17.0293 |
| 1.868 | 0.7600 | 700 | 1.5133 | 45.549 | 21.791 | 37.9979 | 42.1384 | 16.2249 |
| 1.7934 | 0.8686 | 800 | 1.4904 | 45.6877 | 22.6099 | 38.4701 | 42.2678 | 16.2335 |
| 1.8638 | 0.9772 | 900 | 1.4783 | 46.2036 | 23.2629 | 39.2818 | 43.0232 | 16.2555 |
| 1.6739 | 1.0858 | 1000 | 1.4597 | 46.4896 | 23.2284 | 39.6004 | 43.1073 | 16.2335 |
| 1.6511 | 1.1944 | 1100 | 1.4717 | 46.3555 | 23.2062 | 39.0139 | 43.0476 | 17.0636 |
| 1.7472 | 1.3029 | 1200 | 1.4456 | 46.8039 | 23.0325 | 39.3688 | 43.267 | 16.9169 |
| 1.6646 | 1.4115 | 1300 | 1.4474 | 46.9795 | 23.8693 | 40.0189 | 43.5672 | 16.4095 |
| 1.7575 | 1.5201 | 1400 | 1.4313 | 47.0233 | 23.2824 | 39.4242 | 43.4246 | 17.1039 |
| 1.6169 | 1.6287 | 1500 | 1.4282 | 47.2462 | 23.6695 | 39.6043 | 43.575 | 16.6883 |
| 1.6276 | 1.7372 | 1600 | 1.4179 | 47.5435 | 24.1485 | 40.2526 | 44.2173 | 16.3386 |
| 1.5724 | 1.8458 | 1700 | 1.4148 | 47.709 | 24.1513 | 40.3054 | 44.3152 | 16.8716 |
| 1.6417 | 1.9544 | 1800 | 1.4070 | 47.711 | 24.3763 | 40.4776 | 44.1524 | 17.099 |
| 1.4839 | 2.0630 | 1900 | 1.4223 | 47.6921 | 24.5385 | 40.5104 | 44.2406 | 16.4535 |
| 1.4515 | 2.1716 | 2000 | 1.4060 | 48.0411 | 24.8227 | 40.9466 | 44.5028 | 16.6675 |
| 1.4827 | 2.2801 | 2100 | 1.4066 | 47.7 | 24.3622 | 40.2299 | 44.1456 | 17.0183 |
| 1.4776 | 2.3887 | 2200 | 1.4066 | 47.9768 | 24.7871 | 40.7986 | 44.5597 | 16.8178 |
| 1.4776 | 2.4973 | 2300 | 1.4017 | 47.9306 | 24.6758 | 40.4826 | 44.4696 | 17.2176 |
| 1.5189 | 2.6059 | 2400 | 1.4000 | 47.422 | 24.3336 | 40.0832 | 43.9033 | 16.5281 |
| 1.5369 | 2.7144 | 2500 | 1.3910 | 47.9702 | 24.7618 | 40.5049 | 44.4661 | 16.9046 |
| 1.4754 | 2.8230 | 2600 | 1.3915 | 48.0885 | 25.0111 | 41.0073 | 44.5462 | 16.3215 |
| 1.4609 | 2.9316 | 2700 | 1.3796 | 48.2953 | 25.1084 | 40.8045 | 44.8141 | 16.6883 |
| 1.2852 | 3.0402 | 2800 | 1.3914 | 48.1816 | 24.9564 | 40.4874 | 44.4959 | 16.6809 |
| 1.3426 | 3.1488 | 2900 | 1.3925 | 47.9864 | 25.1931 | 40.6587 | 44.3335 | 16.7457 |
| 1.342 | 3.2573 | 3000 | 1.3907 | 47.9714 | 25.0598 | 40.7272 | 44.4796 | 16.6663 |
| 1.3408 | 3.3659 | 3100 | 1.3876 | 47.9041 | 24.8444 | 40.4734 | 44.1852 | 17.0917 |
| 1.3964 | 3.4745 | 3200 | 1.3831 | 48.244 | 25.3169 | 40.7608 | 44.6435 | 16.846 |
| 1.2923 | 3.5831 | 3300 | 1.3872 | 48.1798 | 25.031 | 40.7752 | 44.7031 | 17.1149 |
| 1.3557 | 3.6916 | 3400 | 1.3797 | 48.4681 | 25.1391 | 40.7846 | 44.9196 | 16.8924 |
| 1.3749 | 3.8002 | 3500 | 1.3799 | 48.2949 | 25.3223 | 40.6975 | 44.7215 | 17.1785 |
| 1.3232 | 3.9088 | 3600 | 1.3761 | 48.2852 | 25.0934 | 40.7396 | 44.6782 | 16.8643 |
| 1.2519 | 4.0174 | 3700 | 1.3756 | 47.8744 | 24.8648 | 40.4524 | 44.4635 | 16.8631 |
| 1.1997 | 4.1260 | 3800 | 1.3859 | 48.6158 | 25.5093 | 41.1598 | 45.2168 | 16.9132 |
| 1.2544 | 4.2345 | 3900 | 1.3837 | 48.492 | 25.1007 | 40.7921 | 44.8931 | 17.0538 |
| 1.2808 | 4.3431 | 4000 | 1.3825 | 48.5394 | 25.5808 | 40.9153 | 44.9679 | 16.912 |
| 1.2971 | 4.4517 | 4100 | 1.3844 | 48.5203 | 25.4213 | 41.0222 | 45.0464 | 16.923 |
| 1.2563 | 4.5603 | 4200 | 1.3842 | 48.5428 | 25.7257 | 41.2674 | 45.0936 | 16.7531 |
| 1.2324 | 4.6688 | 4300 | 1.3828 | 48.6838 | 25.797 | 41.216 | 45.1151 | 16.8362 |
| 1.3399 | 4.7774 | 4400 | 1.3831 | 48.5336 | 25.5641 | 40.8484 | 44.9315 | 16.9523 |
| 1.3147 | 4.8860 | 4500 | 1.3823 | 48.5021 | 25.4093 | 40.8773 | 44.8717 | 16.8851 |
| 1.2837 | 4.9946 | 4600 | 1.3830 | 48.5521 | 25.5283 | 40.8665 | 44.9575 | 16.9144 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.0
- Datasets 2.14.5
- Tokenizers 0.19.1
|
baseten/btest-Mistral-7B-Instruct-v0.2-A100-2a115dae-TP2-lora | baseten | 2024-07-01T05:38:32Z | 0 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T05:37:23Z | Entry not found |
taehyunzzz/switch-base-32-xsum | taehyunzzz | 2024-07-01T05:38:11Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T05:38:11Z | Entry not found |