modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
mradermacher/WestOrcaMonarch-DPO-7B-GGUF | mradermacher | 2024-05-30T11:53:39Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"axolotl",
"en",
"base_model:jsfs11/WestOrcaMonarch-DPO-7B",
"base_model:quantized:jsfs11/WestOrcaMonarch-DPO-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T10:16:49Z | ---
base_model: jsfs11/WestOrcaMonarch-DPO-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jsfs11/WestOrcaMonarch-DPO-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
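For a quick start, here is a minimal sketch using the `llama-cpp-python` bindings (an assumption on my part: any GGUF-capable runtime works; the file name and prompt below are illustrative):
```python
from llama_cpp import Llama

# Point model_path at a downloaded quant, e.g. the Q4_K_M file from the table below.
llm = Llama(model_path="WestOrcaMonarch-DPO-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```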
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WestOrcaMonarch-DPO-7B-GGUF/resolve/main/WestOrcaMonarch-DPO-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
av-generation/t5-small-ve-oa-mine | av-generation | 2024-05-30T11:53:35Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T11:53:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
av-generation/t5-large-ag-oa-mine | av-generation | 2024-05-30T11:53:02Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T11:49:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Adriana213/distilbert-base-uncased-finetuned-clinc | Adriana213 | 2024-05-30T11:50:14Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-30T11:29:47Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
datasets:
- clinc_oos
library_name: transformers
pipeline_tag: text-classification
---
# Transformer Efficiency and Knowledge Distillation
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the CLINC150 ([clinc_oos](https://huggingface.co/datasets/clinc_oos)) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7872
- Accuracy: 0.9206
## Model description
This setup involves benchmarking the performance of a fine-tuned BERT model (transformersbook/bert-base-uncased-finetuned-clinc) and applying knowledge distillation to train a smaller DistilBERT model. The BERT model is used for text classification tasks, and its efficiency is evaluated in terms of accuracy, model size, and latency. The DistilBERT model is trained to mimic the BERT model's performance while being more efficient.
## Intended uses & limitations
### Intended uses
- Evaluating the performance efficiency of transformer models.
- Applying knowledge distillation to create smaller and faster models for text classification.
### Limitations
- The benchmark results are specific to the dataset used (CLINC150) and may not generalize to other datasets.
- Knowledge distillation relies on the quality and performance of the teacher model.
## Training and evaluation data
The BERT model is fine-tuned on the CLINC150 dataset, which consists of labeled examples for intent classification. The dataset includes training, validation, and test splits.
## Training procedure
### Performance Benchmark
The performance of the BERT model is evaluated using the PerformanceBenchmark class, which measures accuracy, model size, and latency.
### Accuracy
The model's accuracy is computed on the test set of the CLINC150 dataset.
```python
from datasets import load_metric

accuracy_score = load_metric("accuracy")
```
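A hedged sketch of how this metric could be applied over the test split, in the same style as the methods below (hypothetical method; the `intents` ClassLabel feature and column names are assumptions):
```python
def compute_accuracy(self):
    # Predict an intent for each test example and score against the references.
    preds, labels = [], []
    for example in self.dataset:
        pred = self.pipeline(example["text"])[0]["label"]
        preds.append(intents.str2int(pred))  # `intents`: assumed ClassLabel feature
        labels.append(example["intent"])
    return accuracy_score.compute(predictions=preds, references=labels)
```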
### Model Size
The size of the model is computed by saving its state dictionary to disk and measuring the file size in megabytes.
```python
from pathlib import Path

import torch

def compute_size(self):
    # Save the model's state dict to disk and report the file size in megabytes.
    state_dict = self.pipeline.model.state_dict()
    tmp_path = Path("model.pt")
    torch.save(state_dict, tmp_path)
    size_mb = tmp_path.stat().st_size / (1024 * 1024)
    tmp_path.unlink()
    return {"size_mb": size_mb}
```
### Latency
The average latency per query is measured over a sample of 100 queries.
```python
import numpy as np
from time import perf_counter

def time_pipeline(self):
    # Time 100 queries and report mean and standard deviation in milliseconds.
    latencies = []
    for example in self.dataset[:100]:
        start_time = perf_counter()
        _ = self.pipeline(example)
        latency = perf_counter() - start_time
        latencies.append(latency)
    time_avg_ms = 1000 * np.mean(latencies)
    time_std_ms = 1000 * np.std(latencies)
    return {"time_avg_ms": time_avg_ms, "time_std_ms": time_std_ms}
```
### Knowledge Distillation
Knowledge distillation is used to train a smaller DistilBERT model using the predictions of the fine-tuned BERT model as soft labels.
### Distillation Process
- Teacher Model: `transformersbook/bert-base-uncased-finetuned-clinc`
- Student Model: `distilbert-base-uncased`
The distillation process involves computing a weighted average of the cross-entropy loss with the ground truth labels and the Kullback-Leibler divergence between the teacher and student model predictions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import Trainer

class DistillationTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Student forward pass; HF models return a cross-entropy loss
        # against the ground-truth labels when labels are supplied.
        outputs_stu = model(**inputs)
        loss_ce = outputs_stu.loss
        logits_stu = outputs_stu.logits
        # Teacher forward pass (no gradients needed).
        with torch.no_grad():
            outputs_tea = self.teacher(**inputs)
            logits_tea = outputs_tea.logits
        # KL divergence between temperature-softened distributions.
        loss_fct = nn.KLDivLoss(reduction="batchmean")
        loss_kd = self.args.temperature ** 2 * loss_fct(
            F.log_softmax(logits_stu / self.args.temperature, dim=-1),
            F.softmax(logits_tea / self.args.temperature, dim=-1)
        )
        # Weighted average of hard-label and distillation losses.
        loss = self.args.alpha * loss_ce + (1. - self.args.alpha) * loss_kd
        return (loss, outputs_stu) if return_outputs else loss
```
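The `self.args.alpha`, `self.args.temperature`, and `self.teacher` attributes referenced above imply a custom arguments class and a teacher handle. A minimal sketch of how they might be wired up (hypothetical names, following the same recipe):
```python
from transformers import TrainingArguments

class DistillationTrainingArguments(TrainingArguments):
    def __init__(self, *args, alpha=0.5, temperature=2.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = alpha              # weight of the hard-label cross-entropy term
        self.temperature = temperature  # softening temperature for the KD term

# The teacher model can then be attached to the trainer instance, e.g.:
# trainer = DistillationTrainer(model=student_model, args=distillation_args, ...)
# trainer.teacher = teacher_model.to(trainer.args.device).eval()
```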
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2931 | 0.7255 |
| 3.8009 | 2.0 | 636 | 1.8849 | 0.8526 |
| 3.8009 | 3.0 | 954 | 1.1702 | 0.8897 |
| 1.7128 | 4.0 | 1272 | 0.8717 | 0.9145 |
| 0.9206 | 5.0 | 1590 | 0.7872 | 0.9206 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Haru4me/dql-BeamRiderNoFrameskip-v4_1 | Haru4me | 2024-05-30T11:48:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-30T11:46:47Z | ---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
metrics:
- type: mean_reward
value: 3956.20 +/- 1425.23
name: mean_reward
verified: false
---
# **DQN** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga Haru4me -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga Haru4me -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ -orga Haru4me
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
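For reference, a hedged sketch of constructing an equivalent agent directly in SB3 from the hyperparameters above (this approximates, rather than reproduces, the RL Zoo setup):
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("BeamRiderNoFrameskip-v4", n_envs=1)  # applies the AtariWrapper
env = VecFrameStack(env, n_stack=4)                        # frame_stack: 4
model = DQN(
    "CnnPolicy",
    env,
    learning_rate=1e-4,
    buffer_size=10_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1_000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=True,
    replay_buffer_kwargs={"handle_timeout_termination": False},  # required with optimize_memory_usage
)
# model.learn(total_timesteps=10_000_000)
```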
|
eeeyounglee/EEVE-10.8B-mean-4096-2 | eeeyounglee | 2024-05-30T11:47:57Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"llama",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-30T11:45:32Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# eeeyounglee/EEVE-10.8B-mean-4096-2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 4096-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('eeeyounglee/EEVE-10.8B-mean-4096-2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eeeyounglee/EEVE-10.8B-mean-4096-2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 224 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.MultipleNegativesRankingLoss_with_logging`
Parameters of the `fit()` method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 112,
"weight_decay": 0.01
}
```
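A hedged sketch of how these parameters plug into `model.fit()` (the custom `MultipleNegativesRankingLoss_with_logging` is approximated by the stock loss, and the training pair is a placeholder):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('eeeyounglee/EEVE-10.8B-mean-4096-2')
train_examples = [InputExample(texts=["an anchor sentence", "a matching sentence"])]  # placeholder data
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model)  # stand-in for the logging variant

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    warmup_steps=112,
    weight_decay=0.01,
    max_grad_norm=1,
    optimizer_params={'lr': 2e-05},
)
```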
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LlamaModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 4096, 'out_features': 4096, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
PrithviS/Reinforce-PoleCart | PrithviS | 2024-05-30T11:47:35Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-30T11:47:25Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PoleCart
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Bagus/hubert_xlarge_emodb | Bagus | 2024-05-30T11:45:24Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"generated_from_trainer",
"base_model:facebook/hubert-xlarge-ll60k",
"base_model:finetune:facebook/hubert-xlarge-ll60k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T05:05:20Z | ---
license: apache-2.0
base_model: facebook/hubert-xlarge-ll60k
tags:
- generated_from_trainer
model-index:
- name: hubert_xlarge_emodb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert_xlarge_emodb
This model is a fine-tuned version of [facebook/hubert-xlarge-ll60k](https://huggingface.co/facebook/hubert-xlarge-ll60k) on an unspecified dataset (presumably EMO-DB, given the model name).
It achieves the following results on the evaluation set:
- Loss: 0.8345
- Uar: 0.8889
- Acc: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Uar | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 0.2 | 5 | 1.3815 | 0.25 | 0.1985 |
| No log | 0.39 | 10 | 1.3436 | 0.5285 | 0.5956 |
| No log | 0.59 | 15 | 1.3028 | 0.5741 | 0.6618 |
| No log | 0.78 | 20 | 1.2412 | 0.6019 | 0.6838 |
| No log | 0.98 | 25 | 1.1652 | 0.75 | 0.8015 |
| 1.2216 | 1.18 | 30 | 1.0883 | 0.7315 | 0.7868 |
| 1.2216 | 1.37 | 35 | 1.0309 | 0.75 | 0.8015 |
| 1.2216 | 1.57 | 40 | 1.0217 | 0.8335 | 0.8603 |
| 1.2216 | 1.76 | 45 | 1.0084 | 0.8714 | 0.8529 |
| 1.2216 | 1.96 | 50 | 0.9415 | 0.7778 | 0.8235 |
| 0.5781 | 2.16 | 55 | 0.9293 | 0.7870 | 0.8309 |
| 0.5781 | 2.35 | 60 | 0.8470 | 0.9448 | 0.9412 |
| 0.5781 | 2.55 | 65 | 0.8673 | 0.8333 | 0.8676 |
| 0.5781 | 2.75 | 70 | 0.8454 | 0.9074 | 0.9265 |
| 0.5781 | 2.94 | 75 | 0.8139 | 0.9167 | 0.9338 |
| 0.2652 | 3.14 | 80 | 0.8254 | 0.8981 | 0.9191 |
| 0.2652 | 3.33 | 85 | 0.8233 | 0.9074 | 0.9265 |
| 0.2652 | 3.53 | 90 | 0.7989 | 0.9259 | 0.9412 |
| 0.2652 | 3.73 | 95 | 0.7939 | 0.9584 | 0.9632 |
| 0.2652 | 3.92 | 100 | 0.8093 | 0.9167 | 0.9338 |
| 0.1537 | 4.12 | 105 | 0.8138 | 0.9167 | 0.9338 |
| 0.1537 | 4.31 | 110 | 0.7898 | 0.9539 | 0.9559 |
| 0.1537 | 4.51 | 115 | 0.8138 | 0.9074 | 0.9265 |
| 0.1537 | 4.71 | 120 | 0.8463 | 0.8704 | 0.8971 |
| 0.1537 | 4.9 | 125 | 0.8643 | 0.8519 | 0.8824 |
| 0.1615 | 5.1 | 130 | 0.8137 | 0.9074 | 0.9265 |
| 0.1615 | 5.29 | 135 | 0.7750 | 0.9724 | 0.9706 |
| 0.1615 | 5.49 | 140 | 0.7745 | 0.9724 | 0.9706 |
| 0.1615 | 5.69 | 145 | 0.8123 | 0.9074 | 0.9265 |
| 0.1615 | 5.88 | 150 | 0.8693 | 0.8426 | 0.875 |
| 0.0762 | 6.08 | 155 | 0.9067 | 0.7870 | 0.8309 |
| 0.0762 | 6.27 | 160 | 0.9123 | 0.7870 | 0.8309 |
| 0.0762 | 6.47 | 165 | 0.8664 | 0.8426 | 0.875 |
| 0.0762 | 6.67 | 170 | 0.8167 | 0.9074 | 0.9265 |
| 0.0762 | 6.86 | 175 | 0.8104 | 0.9259 | 0.9412 |
| 0.1321 | 7.06 | 180 | 0.8222 | 0.8981 | 0.9191 |
| 0.1321 | 7.25 | 185 | 0.8339 | 0.8889 | 0.9118 |
| 0.1321 | 7.45 | 190 | 0.8468 | 0.8704 | 0.8971 |
| 0.1321 | 7.65 | 195 | 0.8453 | 0.8704 | 0.8971 |
| 0.1321 | 7.84 | 200 | 0.8453 | 0.8704 | 0.8971 |
| 0.027 | 8.04 | 205 | 0.8346 | 0.8889 | 0.9118 |
| 0.027 | 8.24 | 210 | 0.8292 | 0.8889 | 0.9118 |
| 0.027 | 8.43 | 215 | 0.8276 | 0.8889 | 0.9118 |
| 0.027 | 8.63 | 220 | 0.8353 | 0.8889 | 0.9118 |
| 0.027 | 8.82 | 225 | 0.8376 | 0.8889 | 0.9118 |
| 0.0499 | 9.02 | 230 | 0.8327 | 0.8889 | 0.9118 |
| 0.0499 | 9.22 | 235 | 0.8317 | 0.8889 | 0.9118 |
| 0.0499 | 9.41 | 240 | 0.8330 | 0.8889 | 0.9118 |
| 0.0499 | 9.61 | 245 | 0.8343 | 0.8889 | 0.9118 |
| 0.0499 | 9.8 | 250 | 0.8345 | 0.8889 | 0.9118 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3
|
Sersh/t2 | Sersh | 2024-05-30T11:45:16Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-70b-Instruct-bnb-4bit",
"region:us"
] | null | 2024-05-30T11:44:18Z | ---
library_name: peft
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
mradermacher/LLAMA3-8B-Coding-GGUF | mradermacher | 2024-05-30T11:45:14Z | 749 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:dinhlnd1610/LLAMA3-8B-Coding",
"base_model:quantized:dinhlnd1610/LLAMA3-8B-Coding",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T11:17:11Z | ---
base_model: dinhlnd1610/LLAMA3-8B-Coding
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dinhlnd1610/LLAMA3-8B-Coding
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
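As a hedged sketch, one of the files below can also be fetched and loaded programmatically with `huggingface_hub` and `llama-cpp-python` (file name illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/LLAMA3-8B-Coding-GGUF",
    filename="LLAMA3-8B-Coding.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```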
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA3-8B-Coding-GGUF/resolve/main/LLAMA3-8B-Coding.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jiajunlong/TinyLLaVA-OpenELM-450M-SigLIP-0.89B | jiajunlong | 2024-05-30T11:43:04Z | 274 | 5 | transformers | [
"transformers",
"safetensors",
"tinyllava",
"text-generation",
"image-text-to-text",
"custom_code",
"arxiv:2402.14289",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2024-04-29T04:09:45Z | ---
license: apache-2.0
pipeline_tag: image-text-to-text
---
**<center><span style="font-size:2em;">TinyLLaVA</span></center>**
[](https://arxiv.org/abs/2402.14289)[](https://github.com/TinyLLaVA/TinyLLaVA_Factory)[](http://8843843nmph5.vicp.fun/#/)
TinyLLaVA has released a family of small-scale Large Multimodal Models (LMMs), ranging from 0.55B to 3.1B parameters. Our best model, TinyLLaVA-Phi-2-SigLIP-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.
### TinyLLaVA
Here, we introduce TinyLLaVA-OpenELM-450M-SigLIP-0.89B, which was trained with the [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) codebase. For the LLM and vision tower, we chose [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The dataset used for training this model is the [LLaVA](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md) dataset.
### Usage
Execute the following test code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
hf_path = 'jiajunlong/TinyLLaVA-OpenELM-450M-SigLIP-0.89B'
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()
config = model.config
tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False, model_max_length=config.tokenizer_model_max_length, padding_side=config.tokenizer_padding_side)
prompt = "What are these?"
image_url = "http://images.cocodataset.org/test-stuff2017/000000000001.jpg"
output_text, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)
print('model output:', output_text)
print('running time:', generation_time)
```
### Result
| model_name | gqa | textvqa | sqa | vqav2 | MME | MMB | MM-VET |
| :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ |
| [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) | 60.3 | 51.7 | 60.3 | 76.9 | 1276.5 | 55.2 | 25.8 |
| [TinyLLaVA-0.89B](https://huggingface.co/jiajunlong/TinyLLaVA-OpenELM-450M-SigLIP-0.89B) | 53.87 | 44.02 | 54.09 | 71.74 | 1118.75 | 37.8 | 20 |
P.S. [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) is an open-source modular codebase for small-scale LMMs, with a focus on simplicity of code implementation, extensibility of new features, and reproducibility of training results. This code repository provides standard training & evaluation pipelines, flexible data preprocessing & model configurations, and easily extensible architectures. Users can customize their own LMMs with minimal coding effort and fewer coding mistakes.
TinyLLaVA Factory integrates a suite of cutting-edge models and methods.
- LLM currently supports OpenELM, TinyLlama, StableLM, Qwen, Gemma, and Phi.
- Vision tower currently supports CLIP, SigLIP, Dino, and combination of CLIP and Dino.
- Connector currently supports MLP, Qformer, and Resampler.
|
Sersh/t1 | Sersh | 2024-05-30T11:42:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-70b-Instruct-bnb-4bit",
"region:us"
] | null | 2024-05-30T11:42:25Z | ---
library_name: peft
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
jiajunlong/TinyLLaVA-OpenELM-450M-CLIP-0.55B | jiajunlong | 2024-05-30T11:38:51Z | 178 | 6 | transformers | [
"transformers",
"safetensors",
"text-generation",
"custom_code",
"arxiv:2402.14289",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-04-29T04:44:54Z | **<center><span style="font-size:2em;">TinyLLaVA</span></center>**
[](https://arxiv.org/abs/2402.14289)[](https://github.com/TinyLLaVA/TinyLLaVA_Factory)[](http://8843843nmph5.vicp.fun/#/)
TinyLLaVA has released a family of small-scale Large Multimodal Models (LMMs), ranging from 0.55B to 3.1B parameters. Our best model, TinyLLaVA-Phi-2-SigLIP-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.
### TinyLLaVA
Here, we introduce TinyLLaVA-OpenELM-450M-CLIP-0.55B, which was trained with the [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) codebase. For the LLM and vision tower, we chose [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) and [clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16), respectively. The dataset used for training this model is the [LLaVA](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md) dataset.
### Usage
Execute the following test code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
hf_path = 'jiajunlong/TinyLLaVA-OpenELM-450M-CLIP-0.55B'
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()
config = model.config
tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False, model_max_length=config.tokenizer_model_max_length, padding_side=config.tokenizer_padding_side)
prompt = "What are these?"
image_url = "http://images.cocodataset.org/test-stuff2017/000000000001.jpg"
output_text, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)
print('model output:', output_text)
print('running time:', generation_time)
```
### Result
| model_name | gqa | textvqa | sqa | vqav2 | MME | MMB | MM-VET |
| :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ |
| [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) | 60.3 | 51.7 | 60.3 | 76.9 | 1276.5 | 55.2 | 25.8 |
| [TinyLLaVA-0.55B](https://huggingface.co/jiajunlong/TinyLLaVA-OpenELM-450M-CLIP-0.55B) | 50.38 | 36.37 | 50.02 | 65.44 | 1056.69 | 26.29 | 15.4 |
P.S. [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) is an open-source modular codebase for small-scale LMMs, with a focus on simplicity of code implementation, extensibility of new features, and reproducibility of training results. This code repository provides standard training & evaluation pipelines, flexible data preprocessing & model configurations, and easily extensible architectures. Users can customize their own LMMs with minimal coding effort and fewer coding mistakes.
TinyLLaVA Factory integrates a suite of cutting-edge models and methods.
- LLM currently supports OpenELM, TinyLlama, StableLM, Qwen, Gemma, and Phi.
- Vision tower currently supports CLIP, SigLIP, Dino, and combination of CLIP and Dino.
- Connector currently supports MLP, Qformer, and Resampler.
|
shyp/Hoshi_model | shyp | 2024-05-30T11:37:54Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-30T11:16:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
av-generation/t5-base-mlt-ae-110k | av-generation | 2024-05-30T11:36:13Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T11:35:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RohithN2004/Llamamodelfinetuning | RohithN2004 | 2024-05-30T11:33:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T11:23:57Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** RohithN2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
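A minimal inference sketch (assumptions: the repo hosts weights loadable with Unsloth's `FastLanguageModel`, a CUDA device is available, and the prompt and generation settings are placeholders):

```python
from unsloth import FastLanguageModel

# Assumption: the uploaded checkpoint loads directly via Unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="RohithN2004/Llamamodelfinetuning",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode

inputs = tokenizer("Write a short greeting.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```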
|
Reihaneh/wav2vec2_fy_common_voice_25 | Reihaneh | 2024-05-30T11:30:49Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-29T09:51:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akshayjambhulkar/mistral-7b-finetuned-mental-health-conversational | akshayjambhulkar | 2024-05-30T11:28:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T11:28:06Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** beingjammy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
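A hedged usage sketch with plain Transformers (assumption: the repo contains full merged weights; if it holds only LoRA adapters, load them with PEFT on top of the base model instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "akshayjambhulkar/mistral-7b-finetuned-mental-health-conversational"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "I have been feeling anxious lately. What can I do?"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```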
|
av-generation/t5-small-ag-ae-110k | av-generation | 2024-05-30T11:24:01Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T11:23:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adriansanz/te-zsc-hybrid | adriansanz | 2024-05-30T11:16:49Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:projecte-aina/roberta-base-ca-v2-cased-te",
"base_model:finetune:projecte-aina/roberta-base-ca-v2-cased-te",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-30T09:20:01Z | ---
license: apache-2.0
base_model: projecte-aina/roberta-base-ca-v2-cased-te
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: hib30_0524_epoch_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hib30_0524_epoch_4
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3876
- Accuracy: 0.955
- Precision: 0.9553
- Recall: 0.955
- F1: 0.9550
- Ratio: 0.487
## Model description
More information needed
## Intended uses & limitations
More information needed
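Given the textual-entailment base, a zero-shot classification sketch (assumptions: the entailment head's label mapping is compatible with the standard zero-shot pipeline, as with the base model; the example text and candidate labels below are taken from the category names in the metrics report):

```python
from transformers import pipeline

# Assumption: the TE head drives the zero-shot pipeline like its base model
classifier = pipeline("zero-shot-classification", model="adriansanz/te-zsc-hybrid")
result = classifier(
    "L'enllumenat del meu carrer no funciona",  # "The street lighting on my street is broken"
    candidate_labels=["Enllumenat públic", "Aigües", "Esports"],
)
print(result["labels"][0], result["scores"][0])
```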
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 47
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----:|
| 0.3491 | 0.04 | 10 | 0.3923 | 0.951 | 0.9510 | 0.951 | 0.9510 | 0.495 |
| 0.3703 | 0.08 | 20 | 0.3979 | 0.954 | 0.9550 | 0.954 | 0.9540 | 0.476 |
| 0.3298 | 0.12 | 30 | 0.4131 | 0.95 | 0.9500 | 0.95 | 0.9500 | 0.498 |
| 0.3453 | 0.16 | 40 | 0.4259 | 0.948 | 0.9489 | 0.948 | 0.9480 | 0.478 |
| 0.3714 | 0.2 | 50 | 0.4134 | 0.951 | 0.9523 | 0.9510 | 0.9510 | 0.473 |
| 0.3345 | 0.24 | 60 | 0.4098 | 0.949 | 0.9490 | 0.949 | 0.9490 | 0.495 |
| 0.3626 | 0.28 | 70 | 0.3956 | 0.949 | 0.9490 | 0.949 | 0.9490 | 0.503 |
| 0.3712 | 0.32 | 80 | 0.3853 | 0.958 | 0.9587 | 0.958 | 0.9580 | 0.48 |
| 0.3403 | 0.36 | 90 | 0.3945 | 0.954 | 0.9542 | 0.954 | 0.9540 | 0.49 |
| 0.3592 | 0.4 | 100 | 0.4063 | 0.951 | 0.9510 | 0.951 | 0.9510 | 0.505 |
| 0.3839 | 0.44 | 110 | 0.3904 | 0.954 | 0.9552 | 0.954 | 0.9540 | 0.474 |
| 0.3685 | 0.48 | 120 | 0.3999 | 0.949 | 0.9512 | 0.9490 | 0.9489 | 0.465 |
| 0.368 | 0.52 | 130 | 0.3817 | 0.958 | 0.9583 | 0.958 | 0.9580 | 0.488 |
| 0.3658 | 0.56 | 140 | 0.3862 | 0.957 | 0.9572 | 0.957 | 0.9570 | 0.489 |
| 0.3752 | 0.6 | 150 | 0.4040 | 0.954 | 0.9561 | 0.954 | 0.9539 | 0.466 |
| 0.3376 | 0.64 | 160 | 0.3977 | 0.956 | 0.9572 | 0.956 | 0.9560 | 0.474 |
| 0.3531 | 0.68 | 170 | 0.3943 | 0.958 | 0.9587 | 0.958 | 0.9580 | 0.48 |
| 0.3433 | 0.72 | 180 | 0.4013 | 0.956 | 0.9576 | 0.956 | 0.9560 | 0.47 |
| 0.396 | 0.76 | 190 | 0.3928 | 0.955 | 0.9557 | 0.9550 | 0.9550 | 0.481 |
| 0.3993 | 0.8 | 200 | 0.3895 | 0.955 | 0.9555 | 0.955 | 0.9550 | 0.483 |
| 0.3738 | 0.84 | 210 | 0.3865 | 0.955 | 0.9553 | 0.955 | 0.9550 | 0.487 |
| 0.334 | 0.88 | 220 | 0.3872 | 0.954 | 0.9544 | 0.954 | 0.9540 | 0.486 |
| 0.4014 | 0.92 | 230 | 0.3880 | 0.955 | 0.9553 | 0.955 | 0.9550 | 0.487 |
| 0.4279 | 0.96 | 240 | 0.3878 | 0.955 | 0.9553 | 0.955 | 0.9550 | 0.487 |
| 0.358 | 1.0 | 250 | 0.3876 | 0.955 | 0.9553 | 0.955 | 0.9550 | 0.487 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
### Metrics report

| Class | Precision | Recall | F1-score | Top1 | Top2 | Top3 | Good1 | Good2 | Support |
|:--------------------------|------:|------:|------:|------:|------:|------:|------:|------:|--------:|
| Aigües | 1.000 | 0.960 | 0.980 | 0.960 | 0.960 | 1.000 | 0.960 | 0.960 | 25 |
| Consum, comerç i mercats | 0.852 | 0.920 | 0.885 | 0.920 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Cultura | 0.917 | 0.880 | 0.898 | 0.880 | 0.960 | 1.000 | 0.960 | 0.960 | 25 |
| Economia | 0.792 | 0.760 | 0.776 | 0.760 | 0.920 | 0.960 | 0.920 | 0.920 | 25 |
| Educació | 0.852 | 0.920 | 0.885 | 0.920 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Enllumenat públic | 0.920 | 0.920 | 0.920 | 0.920 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Esports | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Habitatge | 0.667 | 0.800 | 0.727 | 0.800 | 0.840 | 0.880 | 0.840 | 0.840 | 25 |
| Horta | 0.913 | 0.840 | 0.875 | 0.840 | 0.960 | 1.000 | 0.920 | 0.920 | 25 |
| Informació general | 0.750 | 0.600 | 0.667 | 0.600 | 0.960 | 1.000 | 0.920 | 0.960 | 25 |
| Informàtica | 0.947 | 0.720 | 0.818 | 0.720 | 0.960 | 0.960 | 0.960 | 0.960 | 25 |
| Joventut | 0.913 | 0.840 | 0.875 | 0.840 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Medi ambient | 0.882 | 0.600 | 0.714 | 0.600 | 0.960 | 0.960 | 0.920 | 0.920 | 25 |
| Neteja de la via pública | 0.792 | 0.760 | 0.776 | 0.760 | 0.960 | 1.000 | 1.000 | 1.000 | 25 |
| Salut pública i Cementiri | 0.880 | 0.880 | 0.880 | 0.880 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Seguretat | 0.909 | 0.800 | 0.851 | 0.800 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Serveis socials | 0.857 | 0.960 | 0.906 | 0.960 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| Tramitacions | 0.677 | 0.840 | 0.750 | 0.840 | 1.000 | 1.000 | 0.960 | 0.960 | 25 |
| Urbanisme | 0.864 | 0.760 | 0.809 | 0.760 | 0.880 | 0.920 | 0.920 | 0.920 | 25 |
| Via pública i mobilitat | 0.575 | 0.920 | 0.708 | 0.920 | 0.960 | 1.000 | 1.000 | 1.000 | 25 |
| macro avg | 0.848 | 0.834 | 0.835 | 0.834 | 0.966 | 0.984 | 0.964 | 0.966 | 500 |
| weighted avg | 0.848 | 0.834 | 0.835 | 0.834 | 0.966 | 0.984 | 0.964 | 0.966 | 500 |

- Accuracy: 0.834
- Error rate: 0.166
|
KimRina/Ko-BioMistral-7B-dare | KimRina | 2024-05-30T11:08:53Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:BioMistral/BioMistral-7B",
"base_model:merge:BioMistral/BioMistral-7B",
"base_model:davidkim205/komt-mistral-7b-v1",
"base_model:merge:davidkim205/komt-mistral-7b-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T10:41:45Z | ---
base_model:
- davidkim205/komt-mistral-7b-v1
- BioMistral/BioMistral-7B
library_name: transformers
tags:
- mergekit
- merge
---
# output_folder_dare
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1) as a base.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: davidkim205/komt-mistral-7b-v1
- model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: davidkim205/komt-mistral-7b-v1
parameters:
int8_mask: true
dtype: bfloat16
```
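A minimal loading sketch for the merged checkpoint (assumptions: standard Transformers causal-LM loading; `bfloat16` matches the merge dtype above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KimRina/Ko-BioMistral-7B-dare")
model = AutoModelForCausalLM.from_pretrained(
    "KimRina/Ko-BioMistral-7B-dare",
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the merge config
    device_map="auto",
)
```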
|
PaceKW/24PDInsight-TextSummarization | PaceKW | 2024-05-30T11:07:47Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:panggi/t5-base-indonesian-summarization-cased",
"base_model:finetune:panggi/t5-base-indonesian-summarization-cased",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T11:07:03Z | ---
base_model: panggi/t5-base-indonesian-summarization-cased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [panggi/t5-base-indonesian-summarization-cased](https://huggingface.co/panggi/t5-base-indonesian-summarization-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5276
## Model description
More information needed
## Intended uses & limitations
More information needed
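A usage sketch (assumption: the checkpoint works with the standard summarization pipeline, as its T5 base does; the input text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="PaceKW/24PDInsight-TextSummarization")
text = "Teks artikel berbahasa Indonesia yang ingin diringkas ..."  # placeholder input
print(summarizer(text, max_length=128, min_length=16)[0]["summary_text"])
```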
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 0.9031 |
| No log | 2.0 | 10 | 0.7196 |
| No log | 3.0 | 15 | 0.6421 |
| No log | 4.0 | 20 | 0.6057 |
| No log | 5.0 | 25 | 0.5856 |
| No log | 6.0 | 30 | 0.5718 |
| No log | 7.0 | 35 | 0.5608 |
| No log | 8.0 | 40 | 0.5524 |
| No log | 9.0 | 45 | 0.5443 |
| No log | 10.0 | 50 | 0.5381 |
| No log | 11.0 | 55 | 0.5335 |
| No log | 12.0 | 60 | 0.5307 |
| No log | 13.0 | 65 | 0.5290 |
| No log | 14.0 | 70 | 0.5279 |
| No log | 15.0 | 75 | 0.5276 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
anil1002/unsloth_phi3-4bit_model | anil1002 | 2024-05-30T11:04:33Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2024-05-30T11:01:06Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ymlee/whisper-small-hi | ymlee | 2024-05-30T11:04:04Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-30T09:59:36Z | ---
language:
- hi
license: apache-2.0
tags:
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: None
args: 'config: hi, split: test'
metrics:
- type: wer
value: 34.466265978159655
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2860
- Wer: 34.4663
## Model description
More information needed
## Intended uses & limitations
More information needed
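A transcription sketch (assumptions: standard Whisper pipeline usage and a local audio file path):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ymlee/whisper-small-hi")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder path
```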
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.082 | 2.4450 | 1000 | 0.2860 | 34.4663 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
SOUMYADEEPSAR/BERT_CLEF2024_task2_epoch1 | SOUMYADEEPSAR | 2024-05-30T11:04:02Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-30T11:03:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Beeface/whisper-small-dv | Beeface | 2024-05-30T11:01:36Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ha",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-29T22:17:50Z | ---
language:
- ha
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small ha - Boniface Godwin
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: ha
split: test
args: ha
metrics:
- name: Wer
type: wer
value: 45.72845156369184
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ha - Boniface Godwin
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6885
- Wer Ortho: 48.6268
- Wer: 45.7285
## Model description
More information needed
## Intended uses & limitations
More information needed
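A transcription sketch (assumptions: standard Whisper pipeline usage; forcing the language and task is optional and shown only as an illustration):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Beeface/whisper-small-dv")
out = asr("sample.wav", generate_kwargs={"language": "hausa", "task": "transcribe"})
print(out["text"])  # "sample.wav" is a placeholder path
```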
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.0751 | 3.1847 | 500 | 0.6885 | 48.6268 | 45.7285 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
onnx-community/yolov10n | onnx-community | 2024-05-30T11:00:17Z | 28 | 6 | transformers.js | [
"transformers.js",
"onnx",
"yolov10",
"object-detection",
"license:agpl-3.0",
"region:us"
] | object-detection | 2024-05-24T21:45:47Z | ---
library_name: transformers.js
pipeline_tag: object-detection
license: agpl-3.0
---
# YOLOv10: Real-Time End-to-End Object Detection
ONNX weights for https://github.com/THU-MIG/yolov10.
Latency-accuracy trade-offs | Size-accuracy trade-offs
:-------------------------:|:-------------------------:
 | 
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform object-detection.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model
const model = await AutoModel.from_pretrained('onnx-community/yolov10n', {
// quantized: false, // (Optional) Use unquantized version.
})
// Load processor
const processor = await AutoProcessor.from_pretrained('onnx-community/yolov10n');
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/city-streets.jpg';
const image = await RawImage.read(url);
const { pixel_values, reshaped_input_sizes } = await processor(image);
// Run object detection
const { output0 } = await model({ images: pixel_values });
const predictions = output0.tolist()[0];
const threshold = 0.5;
const [newHeight, newWidth] = reshaped_input_sizes[0]; // Reshaped height and width
const [xs, ys] = [image.width / newWidth, image.height / newHeight]; // x and y resize scales
for (const [xmin, ymin, xmax, ymax, score, id] of predictions) {
if (score < threshold) continue;
// Convert to original image coordinates
const bbox = [xmin * xs, ymin * ys, xmax * xs, ymax * ys].map(x => x.toFixed(2)).join(', ');
console.log(`Found "${model.config.id2label[id]}" at [${bbox}] with score ${score.toFixed(2)}.`);
}
// Found "car" at [559.30, 472.72, 799.58, 598.15] with score 0.95.
// Found "car" at [221.91, 422.56, 498.09, 521.85] with score 0.94.
// Found "bicycle" at [1.59, 646.99, 137.72, 730.35] with score 0.92.
// Found "bicycle" at [561.25, 593.65, 695.01, 671.73] with score 0.91.
// Found "person" at [687.74, 324.93, 739.70, 415.04] with score 0.89.
// ...
``` |
xyq019971/first | xyq019971 | 2024-05-30T10:59:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-30T10:54:56Z | ---
license: apache-2.0
---
|
onnx-community/yolov10m | onnx-community | 2024-05-30T10:58:57Z | 272 | 5 | transformers.js | [
"transformers.js",
"onnx",
"yolov10",
"object-detection",
"license:agpl-3.0",
"region:us"
] | object-detection | 2024-05-24T21:45:43Z | ---
library_name: transformers.js
pipeline_tag: object-detection
license: agpl-3.0
---
# YOLOv10: Real-Time End-to-End Object Detection
ONNX weights for https://github.com/THU-MIG/yolov10.
Latency-accuracy trade-offs | Size-accuracy trade-offs
:-------------------------:|:-------------------------:
 | 
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform object-detection.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model
const model = await AutoModel.from_pretrained('onnx-community/yolov10m', {
// quantized: false, // (Optional) Use unquantized version.
})
// Load processor
const processor = await AutoProcessor.from_pretrained('onnx-community/yolov10m');
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/city-streets.jpg';
const image = await RawImage.read(url);
const { pixel_values, reshaped_input_sizes } = await processor(image);
// Run object detection
const { output0 } = await model({ images: pixel_values });
const predictions = output0.tolist()[0];
const threshold = 0.5;
const [newHeight, newWidth] = reshaped_input_sizes[0]; // Reshaped height and width
const [xs, ys] = [image.width / newWidth, image.height / newHeight]; // x and y resize scales
for (const [xmin, ymin, xmax, ymax, score, id] of predictions) {
if (score < threshold) continue;
// Convert to original image coordinates
const bbox = [xmin * xs, ymin * ys, xmax * xs, ymax * ys].map(x => x.toFixed(2)).join(', ');
console.log(`Found "${model.config.id2label[id]}" at [${bbox}] with score ${score.toFixed(2)}.`);
}
// Found "car" at [559.30, 472.72, 799.58, 598.15] with score 0.95.
// Found "car" at [221.91, 422.56, 498.09, 521.85] with score 0.94.
// Found "bicycle" at [1.59, 646.99, 137.72, 730.35] with score 0.92.
// Found "bicycle" at [561.25, 593.65, 695.01, 671.73] with score 0.91.
// Found "person" at [687.74, 324.93, 739.70, 415.04] with score 0.89.
// ...
``` |
3lr3y/cahiernoir | 3lr3y | 2024-05-30T10:58:00Z | 0 | 0 | null | [
"text-generation",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-05-30T10:56:25Z | ---
license: apache-2.0
pipeline_tag: text-generation
--- |
Sadat07/phi-lamini-1_5 | Sadat07 | 2024-05-30T10:56:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T10:56:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phind-4869/ppo-LunarLander-v2 | phind-4869 | 2024-05-30T10:52:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-30T10:24:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 295.50 +/- 13.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's files for the actual .zip):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained PPO agent
# (the .zip filename is an assumption; adjust it to this repo's actual file)
checkpoint = load_from_hub(repo_id="phind-4869/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Mais99/my_awesome_model1 | Mais99 | 2024-05-30T10:52:33Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-30T09:09:16Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Mais99/my_awesome_model1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mais99/my_awesome_model1
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5903
- Validation Loss: 0.3487
- Train Accuracy: 0.862
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 310, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5903 | 0.3487 | 0.862 | 0 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
xyq019971/23 | xyq019971 | 2024-05-30T10:48:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-29T09:08:25Z | ---
license: apache-2.0
---
|
harshh1307/dish_rec_mlm | harshh1307 | 2024-05-30T10:47:05Z | 183 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-30T10:11:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: dish_rec_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dish_rec_mlm
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1860
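A quick way to query the masked-language model (a sketch; the base model is RoBERTa-style, so the mask token is `<mask>`):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="harshh1307/dish_rec_mlm")
print(unmasker("I would recommend the grilled <mask>."))
```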
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.383 | 1.0 | 1504 | 0.2941 |
| 0.2692 | 2.0 | 3008 | 0.2174 |
| 0.2273 | 3.0 | 4512 | 0.1860 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.2
- Tokenizers 0.13.3
|
reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF | reach-vb | 2024-05-30T10:39:59Z | 0 | 0 | null | [
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"license:other",
"region:us"
] | null | 2024-05-30T10:39:01Z | ---
language:
- code
license: other
tags:
- code
- llama-cpp
- gguf-my-repo
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
---
# reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF
This model was converted to GGUF format from [`bullerwins/Codestral-22B-v0.1-hf`](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF --model codestral-22b-v0.1-hf-q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF --model codestral-22b-v0.1-hf-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m codestral-22b-v0.1-hf-q8_0.gguf -n 128
```
|
pastells/en-zh-test | pastells | 2024-05-30T10:39:48Z | 63 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T09:55:52Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-zh
tags:
- generated_from_keras_callback
model-index:
- name: pastells/en-zh-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pastells/en-zh-test
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6747
- Validation Loss: 4.4216
- Train Bleu: 0.0097
- Train Gen Len: 100.1395
- Epoch: 4
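A minimal translation call (a sketch; this is a TensorFlow checkpoint, and given the reported BLEU the outputs will be rough):

```python
from transformers import pipeline

translator = pipeline("translation", model="pastells/en-zh-test", framework="tf")
print(translator("Machine translation is fun.", max_length=64))
```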
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 4.4659 | 4.4875 | 0.0102 | 99.2093 | 0 |
| 4.2023 | 4.4382 | 0.0588 | 34.8372 | 1 |
| 4.0009 | 4.4255 | 0.0568 | 34.5116 | 2 |
| 3.8234 | 4.4239 | 0.0641 | 33.3488 | 3 |
| 3.6747 | 4.4216 | 0.0097 | 100.1395 | 4 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AliE02/NaturalLanguagePioneersDPO | AliE02 | 2024-05-30T10:38:29Z | 151 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"education",
"conversational",
"custom_code",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T07:40:01Z | ---
license: mit
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
language:
- en
tags:
- education
--- |
HanJisu/distilbert-base-uncased-finetuned-emotion | HanJisu | 2024-05-30T10:36:33Z | 120 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-30T10:30:18Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251247834824673
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.925
- F1: 0.9251
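A minimal sketch for trying the checkpoint (the predicted label ids map to the `emotion` dataset's six classes):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HanJisu/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```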
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8367 | 1.0 | 250 | 0.3265 | 0.904 | 0.9039 |
| 0.2548 | 2.0 | 500 | 0.2225 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LittleFish-Coder/fish_pix2pix | LittleFish-Coder | 2024-05-30T10:36:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-30T10:35:16Z | ---
license: apache-2.0
---
|
lamm-mit/Cephalo-Idefics-2-vision-8b-alpha | lamm-mit | 2024-05-30T10:33:47Z | 52 | 1 | transformers | [
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"nlp",
"code",
"vision",
"chemistry",
"engineering",
"biology",
"bio-inspired",
"text-generation-inference",
"materials science",
"conversational",
"multilingual",
"arxiv:2405.19076",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-23T19:54:47Z | ---
language:
- multilingual
license: apache-2.0
library_name: transformers
tags:
- nlp
- code
- vision
- chemistry
- engineering
- biology
- bio-inspired
- text-generation-inference
- materials science
pipeline_tag: image-text-to-text
inference:
parameters:
temperature: 0.3
widget:
- messages:
- role: user
content: <|image_1|>Can you describe what you see in the image?
---
## Model Summary
Cephalo is a series of multimodal materials science focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks.
A novel aspect of Cephalo's development is the innovative dataset generation method. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents. It involves extracting images and captions from PDFs to create well-reasoned image-text pairs, utilizing large language models (LLMs) for natural language processing. These image-text pairs are then refined and validated through LLM-based NLP processing, ensuring high-quality and contextually relevant data for training.
Cephalo can interpret complex visual scenes, generate contextually accurate language descriptions, and answer queries.
The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder and an autoregressive transformer to support complex natural language understanding.

Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods.
This version of Cephalo, lamm-mit/Cephalo-Idefics-2-vision-8b-alpha, is based on the HuggingFaceM4/idefics2-8b-chatty model. The model was trained on a combination of scientific text-image data extracted from Wikipedia and scientific papers. For further details on the base model, see: https://huggingface.co/HuggingFaceM4/idefics2-8b-chatty. More details about technical aspects of the model, training and example applications to materials science problems are provided in the paper (reference at the bottom).
### Chat Format
The lamm-mit/Cephalo-Idefics-2-vision-8b-alpha model is suitable for one or more image inputs, with prompts using the chat format as follows:
```raw
User: You carefully study the image, and respond accurately, but succinctly. Think step-by-step.
<image>What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI.<end_of_utterance>
Assistant:
```
where the model generates the text after `Assistant:` . For multi-turn conversations, the prompt should be formatted as follows:
```raw
User: You carefully study the image, and respond accurately, but succinctly. Think step-by-step.
<image>What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI.<end_of_utterance>
Assistant: The image depicts ants climbing a vertical surface using their legs and claws. This behavior is observed in nature and can inspire the design of multi-agent AI systems that mimic the coordinated movement of these insects. The relevance lies in the potential application of such systems in robotics and materials science, where efficient and adaptive movement is crucial.<end_of_utterance>
User: How could this be used to design a fracture resistant material?<end_of_utterance>
Assistant:
```
### Sample inference code
These code snippets show how to quickly get started on a GPU:
```python
import torch
from PIL import Image
import requests
from transformers import AutoProcessor, Idefics2ForConditionalGeneration
from tqdm.notebook import tqdm

DEVICE = 'cuda:0'

model_id = 'lamm-mit/Cephalo-Idefics-2-vision-8b-alpha'

model = Idefics2ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,                 # if your GPU allows
    _attn_implementation="flash_attention_2",   # make sure Flash Attention 2 is installed
    trust_remote_code=True,
).to(DEVICE)

processor = AutoProcessor.from_pretrained(
    model_id,
    do_image_splitting=True
)
```
See section towards the end for more comments on model optimization, including quantization.
If you need to manually set the chat template:
```python
IDEFICS2_CHAT_TEMPLATE = "{% for message in messages %}{{message['role'].capitalize()}}{% if message['content'][0]['type'] == 'image' %}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if line['type'] == 'text' %}{{line['text']}}{% elif line['type'] == 'image' %}{{ '<image>' }}{% endif %}{% endfor %}<end_of_utterance>\n{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}"
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)  # model_id as defined above
tokenizer.chat_template = IDEFICS2_CHAT_TEMPLATE
processor.tokenizer = tokenizer
```
Simple inference example:
```python
from transformers.image_utils import load_image
image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg")
# Create inputs
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."},
]
},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
# Get inputs using the processor
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
# Generate
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```
Next we provide a convenience function for inference. This function takes the model, processor, question, and images, along with messages and images objects for repeated chat-like interactions with the model.
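The function relies on three small helpers (`ensure_list`, `is_url`, `format_conversation`) that are not defined in this card; a minimal sketch of what they might look like (the names match the calls below, but the implementations are assumptions):

```python
from IPython.display import display, Markdown, HTML

def ensure_list(x):
    # Wrap a single item in a list so downstream code can iterate uniformly
    return x if isinstance(x, list) else [x]

def is_url(x):
    # Treat strings with an HTTP(S) scheme as remote images to be downloaded
    return isinstance(x, str) and x.startswith(("http://", "https://"))

def format_conversation(messages, images):
    # Render the chat history as simple HTML for notebook display (layout is an assumption)
    html = ""
    for msg in messages:
        text = " ".join(part["text"] for part in msg["content"] if part["type"] == "text")
        html += f"<p><b>{msg['role'].capitalize()}:</b> {text}</p>"
    return html
```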
```python
def ask_about_image(model, processor, question,
                    images_input=[],
                    verbatim=False,
                    temperature=0.1,
                    show_image=False,
                    system="You are a biomaterials scientist who responds accurately. ",
                    init_instr="",
                    show_conversation=True,
                    max_new_tokens=256,
                    messages=[],
                    images=[],
                    use_Markdown=False,
                    ):
    query = question
    images_input = ensure_list(images_input)
    if len(images) == 0:
        if len(images_input) > 0:
            for image in tqdm(images_input):
                if is_url(image):
                    image = load_image(image)
                images.append(image)
                if show_image:
                    display(image)

    if len(messages) == 0:
        base_message = {
            "role": "user",
            "content": [
                {"type": "text", "text": system + init_instr},
                # Image messages will be added dynamically here
                {"type": "text", "text": query}
            ]
        }
        # Add one image message per input image, inserted before the final text message
        image_messages = [{"type": "image"} for _ in images_input]
        base_message["content"][1:1] = image_messages
        # Append the constructed message to the messages list
        messages.append(base_message)
    else:
        messages.append(
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": query}
                ]
            }
        )

    if verbatim:
        print(messages)

    text = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=[text.strip()], images=images, return_tensors="pt", padding=True).to(DEVICE)

    generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, temperature=temperature, do_sample=True)
    generated_texts = processor.batch_decode(generated_ids[:, inputs["input_ids"].size(1):], skip_special_tokens=True)

    messages.append(
        {
            "role": "assistant",
            "content": [{"type": "text", "text": generated_texts[0]}]
        }
    )
    formatted_conversation = format_conversation(messages, images)

    # Display the formatted conversation, e.g. in a Jupyter notebook
    if show_conversation:
        if use_Markdown:
            display(Markdown(formatted_conversation))
        else:
            display(HTML(formatted_conversation))

    return generated_texts, messages, images
question = "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."
url1 = "https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg"
response, messages, images = ask_about_image(model, processor, question,
                                             images_input=[url1],
                                             temperature=0.1,
                                             system='',
                                             init_instr='You carefully study the image, and respond accurately, but succinctly. Think step-by-step.\n\n',
                                             show_conversation=True,
                                             max_new_tokens=512, messages=[], images=[])
```
Sample output:

<small>Image by [Vaishakh Manohar](https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/)</small>
<pre style="white-space: pre-wrap;">
The image depicts a group of ants moving in a coordinated manner to climb a vertical surface. This behavior is known as cooperative climbing and involves the use of multiple agents working together to achieve a common goal. The relevance for materials design lies in the potential application of multi-agent AI in developing new materials with improved properties through the collaboration of multiple agents.
</pre>
## Dataset generation
The schematic below visualizes the dataset-generation approach described above, which is used to produce the training data for the vision model.
The image below shows reproductions of two representative pages of the scientific article (here, Spivak, Buehler, et al., 2011), and how they are used to extract visual scientific data for training the Cephalo model.

# Further model optimizations
If your GPU allows, load and run inference in half precision (`torch.float16` or `torch.bfloat16`).
```diff
model = AutoModelForVision2Seq.from_pretrained(
"lamm-mit/Cephalo-Idefics-2-vision-8b-alpha",
+ torch_dtype=torch.float16,
).to(DEVICE)
```
**Vision encoder efficiency**
Given the high resolution supported, the vision part of the model can be memory hungry depending on your configuration. If you are GPU-memory-constrained, you can:
- **deactivate the image splitting.** To do so, add `do_image_splitting=False` when initializing the processor (`AutoProcessor.from_pretrained`). There are no changes required on the model side. Note that only the sft model has been trained with image splitting.
- **decrease the maximum image resolution.** To do so, add `size= {"longest_edge": 448, "shortest_edge": 378}` when initializing the processor (`AutoProcessor.from_pretrained`). In particular, the `longest_edge` value can be adapted to fit the need (the default value is `980`). We recommend using values that are multiples of 14. There are no changes required on the model side.
`do_image_splitting=True` is especially needed to boost performance on complex tasks where a very large image is used as input. The model was fine-tuned with image splitting turned on. For simple tasks, this argument can be safely set to `False`.
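For example, a memory-constrained setup might initialize the processor like this (a sketch using the options described above):

```python
from transformers import AutoProcessor

# Trade some image fidelity for memory: no splitting, smaller maximum resolution
processor = AutoProcessor.from_pretrained(
    "lamm-mit/Cephalo-Idefics-2-vision-8b-alpha",
    do_image_splitting=False,
    size={"longest_edge": 448, "shortest_edge": 378},
)
```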
**Using Flash-attention 2 to speed up generation**
<details><summary>Click to expand.</summary>
Make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for the package installation. Simply change the snippet above to:
```diff
model = AutoModelForVision2Seq.from_pretrained(
"lamm-mit/Cephalo-Idefics-2-vision-8b-alpha",
+ torch_dtype=torch.bfloat16,
+ _attn_implementation="flash_attention_2",
).to(DEVICE)
```
</details>
**4 bit quantization with bitsandbytes**
<details><summary>Click to expand.</summary>
It is possible to load Idefics2 in 4 bits with `bitsandbytes`. Make sure that you have `accelerate` and `bitsandbytes` installed.
```diff
+ from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForVision2Seq.from_pretrained(
"lamm-mit/Cephalo-Idefics-2-vision-8b-alpha",
+ torch_dtype=torch.bfloat16,
+ quantization_config=quantization_config,
).to(DEVICE)
```
</details>
## Citation
Please cite as:
```bibtex
@article{Buehler_Cephalo_2024,
title={Cephalo: Multi-Modal Vision-Language Models for Bio-Inspired Materials Analysis and Design},
author={Markus J. Buehler},
journal={arXiv preprint arXiv:2405.19076},
year={2024}
}
``` |
pankaj0507/my_model2 | pankaj0507 | 2024-05-30T10:32:47Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2024-05-30T10:32:45Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.3
model-index:
- name: my_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4432
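A minimal sketch for loading this PEFT adapter on top of the base model (assumes the `peft` library and access to the base Mistral weights):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("pankaj0507/my_model2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
```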
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
GovindJo/pegasus-samsum | GovindJo | 2024-05-30T10:31:51Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T09:55:32Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4834
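A minimal sketch for summarizing a SAMSum-style dialogue with the checkpoint:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="GovindJo/pegasus-samsum")
dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure, I'll be there soon."
print(summarizer(dialogue, max_length=32))
```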
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6997 | 0.54 | 500 | 1.4834 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
av-generation/t5-large-ve-ae-110k | av-generation | 2024-05-30T10:31:41Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T10:18:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Prahas10/roof-shingles | Prahas10 | 2024-05-30T10:30:03Z | 22 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-384",
"base_model:finetune:google/vit-base-patch16-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-30T07:03:46Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-384
tags:
- generated_from_keras_callback
model-index:
- name: Prahas10/roof-shingles
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Prahas10/roof-shingles
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1015
- Validation Loss: 0.3231
- Train Accuracy: 0.9083
- Epoch: 29
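A minimal sketch for classifying a roof image (this is a TensorFlow checkpoint; the file path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Prahas10/roof-shingles", framework="tf")
print(classifier("shingle_photo.jpg"))  # placeholder path or URL to a roof image
```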
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 138270, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.0001}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 3.8367 | 2.9703 | 0.4403 | 0 |
| 1.3092 | 1.6169 | 0.7093 | 1 |
| 0.4529 | 1.4414 | 0.7112 | 2 |
| 0.2229 | 0.8445 | 0.8368 | 3 |
| 0.1451 | 0.7074 | 0.8556 | 4 |
| 0.1053 | 0.8585 | 0.7992 | 5 |
| 0.1175 | 1.0721 | 0.7389 | 6 |
| 0.1388 | 0.5802 | 0.8542 | 7 |
| 0.0647 | 0.3764 | 0.9083 | 8 |
| 0.1049 | 1.0484 | 0.7366 | 9 |
| 0.0740 | 0.6191 | 0.8321 | 10 |
| 0.0816 | 0.6273 | 0.8283 | 11 |
| 0.0981 | 0.2901 | 0.9172 | 12 |
| 0.0614 | 0.5081 | 0.8523 | 13 |
| 0.0548 | 0.4983 | 0.8612 | 14 |
| 0.0652 | 0.8008 | 0.7850 | 15 |
| 0.0857 | 0.5845 | 0.8415 | 16 |
| 0.0847 | 0.6887 | 0.8184 | 17 |
| 0.0645 | 0.6104 | 0.8405 | 18 |
| 0.0891 | 0.4770 | 0.8532 | 19 |
| 0.0532 | 0.5074 | 0.8500 | 20 |
| 0.0483 | 0.8208 | 0.7850 | 21 |
| 0.0498 | 0.2679 | 0.9083 | 22 |
| 0.0406 | 0.3261 | 0.9036 | 23 |
| 0.0578 | 0.6373 | 0.8340 | 24 |
| 0.1010 | 0.5037 | 0.8481 | 25 |
| 0.0583 | 0.2993 | 0.8984 | 26 |
| 0.0398 | 0.1538 | 0.9492 | 27 |
| 0.0492 | 0.4397 | 0.8641 | 28 |
| 0.1015 | 0.3231 | 0.9083 | 29 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
probejie/temp | probejie | 2024-05-30T10:28:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-29T17:56:17Z | ---
license: apache-2.0
---
|
Nogu-t/llama-3-8b-ver3_4 | Nogu-t | 2024-05-30T10:24:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T10:24:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Knobi3/ParalegalBeagle | Knobi3 | 2024-05-30T10:24:38Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T10:19:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phi0112358/llamafile-nous-hermes-2-mixtral | phi0112358 | 2024-05-30T10:18:42Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-30T10:18:41Z | ---
license: apache-2.0
---
|
av-generation/t5-base-ve-ae-110k | av-generation | 2024-05-30T10:17:12Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T10:16:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
av-generation/t5-small-ve-ae-110k | av-generation | 2024-05-30T10:15:57Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T10:15:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf | RichardErkhov | 2024-05-30T10:14:05Z | 36 | 0 | null | [
"gguf",
"arxiv:2311.17487",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T07:28:46Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Taiwan-LLM-7B-v2.0.1-chat - GGUF
- Model creator: https://huggingface.co/yentinglin/
- Original model: https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0.1-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Taiwan-LLM-7B-v2.0.1-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q2_K.gguf) | Q2_K | 2.36GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K.gguf) | Q3_K | 3.07GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_K.gguf) | Q4_K | 3.8GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_K.gguf) | Q5_K | 4.45GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q6_K.gguf) | Q6_K | 5.15GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q8_0.gguf) | Q8_0 | 6.67GB |
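No usage instructions ship with this repo, so here is a minimal sketch that downloads one quant and runs it with `llama-cpp-python`, using the Vicuna-style `USER:`/`ASSISTANT:` prompt shown in the widget of the original card below:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a mid-size quant from this repo (any file from the table above works).
path = hf_hub_download(
    repo_id="RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf",
    filename="Taiwan-LLM-7B-v2.0.1-chat.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:"
)
print(llm(prompt, max_tokens=256)["choices"][0]["text"])
```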
Original model description:
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
Name: text
Mail: text
Organization: text
Country: text
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 7B v2.0.1 chat
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-7B-v2.0-base](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0-base)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance

## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-7B-v2.0.1-chat", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### Training hyperparameters



The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
## Citation
If you find Taiwan LLM is useful in your work, please cite it with:
```
@misc{lin2023taiwan,
title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
author={Yen-Ting Lin and Yun-Nung Chen},
year={2023},
eprint={2311.17487},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Acknowledgement
Taiwan LLM v2 is conducted in collaboration with [Ubitus K.K.](http://ubitus.net). Ubitus provides valuable compute resources for the project.
|
av-generation/t5-large-end2end-ae-110k | av-generation | 2024-05-30T10:13:39Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T10:11:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
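Until the authors document usage, here is a minimal sketch assuming standard T5 text2text generation; the input format expected by this answer-extraction checkpoint is an assumption:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="av-generation/t5-large-end2end-ae-110k")

# Hypothetical input: the checkpoint name suggests end-to-end answer extraction,
# but the exact expected prompt format is not documented in this card.
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
print(generator(context, max_new_tokens=64))
```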
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
av-generation/t5-base-end2end-ae-110k | av-generation | 2024-05-30T10:09:49Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T10:09:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF | mradermacher | 2024-05-30T10:08:30Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:saishf/SOVL-Mega-Mash-V2-L3-8B",
"base_model:quantized:saishf/SOVL-Mega-Mash-V2-L3-8B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T09:39:30Z | ---
base_model: saishf/SOVL-Mega-Mash-V2-L3-8B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/saishf/SOVL-Mega-Mash-V2-L3-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL-Mega-Mash-V2-L3-8B-GGUF/resolve/main/SOVL-Mega-Mash-V2-L3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LiteLLMs/Codestral-22B-v0.1-GGUF | LiteLLMs | 2024-05-30T10:02:53Z | 16,899 | 1 | null | [
"gguf",
"code",
"GGUF",
"license:other",
"region:us"
] | null | 2024-05-30T09:37:21Z |
---
language:
- code
license: other
tags:
- code
- GGUF
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
quantized_by: andrijdavid
---
# Codestral-22B-v0.1-GGUF
- Original model: [Codestral-22B-v0.1](https://huggingface.co/mistral-community/Codestral-22B-v0.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Codestral-22B-v0.1](https://huggingface.co/mistral-community/Codestral-22B-v0.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), A lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT), An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
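As a quick sanity check on these figures, the Q4_K value can be reproduced from the super-block layout (a sketch; the two fp16 super-block constants come from llama.cpp's k-quant block layout rather than the list above):

```python
# One GGML_TYPE_Q4_K super-block: 8 blocks x 32 weights = 256 weights.
weights = 8 * 32
weight_bits = weights * 4           # 4-bit quantized weights
scale_min_bits = 8 * (6 + 6)        # per-block scale and min, 6 bits each
superblock_bits = 2 * 16            # fp16 super-block scale and min (llama.cpp layout)
bpw = (weight_bits + scale_min_bits + superblock_bits) / weights
print(bpw)  # 4.5 - matches the GGML_TYPE_Q4_K figure above
```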
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Codestral-22B-v0.1-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Codestral-22B-v0.1-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Codestral-22B-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Codestral-22B-v0.1-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
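For reference, a minimal LangChain + llama-cpp-python sketch might look like the following (the model path and generation settings are placeholders; see the guides above for the full set of options):

```python
from langchain_community.llms import LlamaCpp

# Point this at one of the GGUF files downloaded earlier (placeholder path).
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    n_ctx=8192,
    temperature=0.7,
)

print(llm.invoke("Write a Python function that reverses a linked list."))
```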
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Codestral-22B-v0.1
# Model Card for Codestral-22B-v0.1
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
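To make the FIM mode concrete, here is a minimal sketch with `llama-cpp-python`; the `[SUFFIX]`/`[PREFIX]` prompt layout below follows Mistral's FIM convention and is an assumption for this GGUF conversion, so verify the tokenizer's special tokens before relying on it:

```python
from llama_cpp import Llama

llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", n_ctx=8192)

# Code surrounding the hole the model should fill.
prefix = 'def remove_non_ascii(s: str) -> str:\n    """Remove non-ASCII characters from a string."""\n    '
suffix = "\n    return result"

# Assumed Mistral-style FIM layout: suffix first, then prefix; the model
# generates the middle tokens that join the two.
prompt = f"[SUFFIX]{suffix}[PREFIX]{prefix}"

middle = llm(prompt, max_tokens=128, stop=["</s>"])["choices"][0]["text"]
print(prefix + middle + suffix)
```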
## Inference
Inference works the same way as for Mistral 7B: the model shares the same architecture, so the same tooling and prompt handling apply.
## Limitations
The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## License
Codestral-22B-v0.1 is released under the [`MNPL-0.1`](https://mistral.ai/licences/MNPL-0.1.md) license.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
<!-- original-model-card end -->
|
cetusian/distilbert-ner-furniture-names | cetusian | 2024-05-30T09:58:40Z | 63 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-30T09:05:16Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: cetusian/distilbert-ner-furniture-names
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cetusian/distilbert-ner-furniture-names
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1626
- Validation Loss: 0.1549
- Train Precision: 0.0
- Train Recall: 0.0
- Train F1: 0.0
- Train Accuracy: 0.9466
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
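Pending details from the authors, a minimal inference sketch (assumes the fine-tuned tokenizer was pushed with the model and that TensorFlow is installed, since the repository ships TF weights):

```python
from transformers import pipeline

# framework="tf" because this repository ships TensorFlow weights.
ner = pipeline(
    "token-classification",
    model="cetusian/distilbert-ner-furniture-names",
    framework="tf",
    aggregation_strategy="simple",
)

print(ner("The oak dining table and the velvet armchair are on sale."))
```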
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 27, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.2043 | 0.2022 | 0.0 | 0.0 | 0.0 | 0.9466 | 0 |
| 0.1626 | 0.1549 | 0.0 | 0.0 | 0.0 | 0.9466 | 1 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half | lightblue | 2024-05-30T09:58:00Z | 7,825 | 16 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2405.18952",
"base_model:lightblue/suzume-llama-3-8B-multilingual",
"base_model:finetune:lightblue/suzume-llama-3-8B-multilingual",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T07:19:40Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
base_model: lightblue/suzume-llama-3-8B-multilingual
model-index:
- name: workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda
results: []
---
# Suzume ORPO
<p align="center">
<img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png" alt="Suzume with Mitsu - a Japanese tree sparrow with honey on it"/>
</p>
[[Paper]](https://arxiv.org/abs/2405.18952) [[Dataset]](https://huggingface.co/datasets/lightblue/mitsu)
This is Suzume ORPO, an ORPO trained fine-tune of the [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) model using our [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset.
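For reference, ORPO (Hong et al., 2024) appends an odds-ratio penalty to the standard SFT loss, so a single training stage both fits the chosen responses and pushes their odds above the rejected ones (the `orpo_alpha` in the config below plays the role of $\lambda$):

$$\mathcal{L}_{\mathrm{ORPO}} = \mathbb{E}_{(x, y_w, y_l)}\left[\mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}}\right], \qquad \mathcal{L}_{\mathrm{OR}} = -\log \sigma\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right)$$

where $\mathrm{odds}_\theta(y \mid x) = P_\theta(y \mid x) \,/\, \left(1 - P_\theta(y \mid x)\right)$.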
We have trained several versions of this model using ORPO and so recommend that you use the best performing model from our tests, [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half).
Note that this model has a non-commercial license, as we used the Command R and Command R+ models to generate our training data for this model ([lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu)).
We are currently working on developing a commercially usable model, so stay tuned for that!
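Until then, a minimal inference sketch with 🤗 Transformers (sampling settings are illustrative; we assume the Llama 3 chat template bundled with the tokenizer):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "日本の首都はどこですか?"}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```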
# Model list
We have ORPO trained the following models using different proportions of the [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset:
* Trained on the top/bottom responses of all prompts in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full)
* Trained on the top/bottom responses of the prompts of the 75\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75)
* Trained on the top/bottom responses of the prompts of the 50\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half)
* Trained on the top/bottom responses of the prompts of the 25\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25)
# Model results
We compare the MT-Bench scores across 6 languages for our 4 ORPO trained models, as well as some baselines:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - The foundation model that our models are ultimately built upon
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) - The highest performing open model on the Chatbot arena that is of a similar size to ours
* gpt-3.5-turbo - A fairly high quality (although not state-of-the-art) proprietary LLM
* [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) - The base model which we train our ORPO finetunes from
| **MT-Bench language** | **meta-llama/Meta-Llama-3-8B-Instruct** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | **lightblue/suzume-llama-3-8B-multilingual** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25** |
|-----------------------|-----------------------------------------|-----------------------------------|-------------------|----------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|
| **Chinese 🇨🇳** | NaN | 6.97 | 7.55 | 7.11 | 7.65 | **7.77** | 7.74 | 7.44 |
| **English 🇺🇸** | 7.98 | 7.92 | **8.26** | 7.73 | 7.98 | 7.94 | 7.98 | 8.22 |
| **French 🇫🇷** | NaN | 7.29 | 7.74 | 7.66 | **7.84** | 7.46 | 7.78 | 7.81 |
| **German 🇩🇪** | NaN | 6.99 | 7.68 | 7.26 | 7.28 | 7.64 | 7.7 | **7.71** |
| **Japanese 🇯🇵** | NaN | 6.22 | **7.84** | 6.56 | 7.2 | 7.12 | 7.34 | 7.04 |
| **Russian 🇷🇺** | NaN | 8.28 | 7.94 | 8.19 | 8.3 | 8.74 | **8.94** | 8.81 |
We can see noticeable improvements in most languages compared to the base model. We also find that our ORPO models achieve the highest score out of all the models we evaluated for a number of languages.
# Training data
We trained this model using the [lightblue/mitsu_tophalf_borda](https://huggingface.co/datasets/lightblue/mitsu_tophalf_borda) dataset.
# Training configuration
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: lightblue/suzume-llama-3-8B-multilingual
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
rl: orpo
orpo_alpha: 0.1
remove_unused_columns: false
chat_template: chatml
datasets:
- path: lightblue/mitsu_tophalf_borda
type: orpo.chat_template
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual-orpo/prepared_mitsu_half_borda
val_set_size: 0.02
output_dir: /workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
use_wandb: true
wandb_project: axolotl
wandb_entity: peterd
wandb_name: mitsu_half_borda
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 8e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 20
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda
This model is a fine-tuned version of [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) on the [lightblue/mitsu_tophalf_borda](https://huggingface.co/datasets/lightblue/mitsu_tophalf_borda) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6299 | 0.02 | 1 | 7.7014 |
| 7.041 | 0.07 | 3 | 3.9786 |
| 0.6089 | 0.15 | 6 | 0.1393 |
| 0.1308 | 0.22 | 9 | 0.1244 |
| 0.1051 | 0.29 | 12 | 0.1112 |
| 0.1021 | 0.36 | 15 | 0.1063 |
| 0.0861 | 0.44 | 18 | 0.1026 |
| 0.1031 | 0.51 | 21 | 0.0979 |
| 0.0996 | 0.58 | 24 | 0.0967 |
| 0.0923 | 0.65 | 27 | 0.0960 |
| 0.1025 | 0.73 | 30 | 0.0944 |
| 0.1103 | 0.8 | 33 | 0.0939 |
| 0.0919 | 0.87 | 36 | 0.0937 |
| 0.104 | 0.94 | 39 | 0.0935 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# How to cite
```tex
@article{devine2024sure,
title={Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets},
author={Devine, Peter},
journal={arXiv preprint arXiv:2405.18952},
year={2024}
}
```
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn)) |
lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25 | lightblue | 2024-05-30T09:57:34Z | 7,696 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2405.18952",
"base_model:lightblue/suzume-llama-3-8B-multilingual",
"base_model:finetune:lightblue/suzume-llama-3-8B-multilingual",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T02:47:58Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
base_model: lightblue/suzume-llama-3-8B-multilingual
model-index:
- name: workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top25_borda
results: []
---
# Suzume ORPO
<p align="center">
<img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png" alt="Suzume with Mitsu - a Japanese tree sparrow with honey on it"/>
</p>
[[Paper]](https://arxiv.org/abs/2405.18952) [[Dataset]](https://huggingface.co/datasets/lightblue/mitsu)
This is Suzume ORPO, an ORPO trained fine-tune of the [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) model using our [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset.
We have trained several versions of this model using ORPO and so recommend that you use the best performing model from our tests, [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half).
Note that this model has a non-commercial license, as we used the Command R and Command R+ models to generate our training data for this model ([lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu)).
We are currently working on developing a commercially usable model, so stay tuned for that!
# Model list
We have ORPO trained the following models using different proportions of the [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset:
* Trained on the top/bottom responses of all prompts in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full)
* Trained on the top/bottom responses of the prompts of the 75\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75)
* Trained on the top/bottom responses of the prompts of the 50\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half)
* Trained on the top/bottom responses of the prompts of the 25\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25)
# Model results
We compare the MT-Bench scores across 6 languages for our 4 ORPO trained models, as well as some baselines:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - The foundation model that our models are ultimately built upon
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) - The highest performing open model on the Chatbot arena that is of a similar size to ours
* gpt-3.5-turbo - A fairly high quality (although not state-of-the-art) proprietary LLM
* [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) - The base model which we train our ORPO finetunes from
| **MT-Bench language** | **meta-llama/Meta-Llama-3-8B-Instruct** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | **lightblue/suzume-llama-3-8B-multilingual** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25** |
|-----------------------|-----------------------------------------|-----------------------------------|-------------------|----------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|
| **Chinese 🇨🇳** | NaN | 6.97 | 7.55 | 7.11 | 7.65 | **7.77** | 7.74 | 7.44 |
| **English 🇺🇸** | 7.98 | 7.92 | **8.26** | 7.73 | 7.98 | 7.94 | 7.98 | 8.22 |
| **French 🇫🇷** | NaN | 7.29 | 7.74 | 7.66 | **7.84** | 7.46 | 7.78 | 7.81 |
| **German 🇩🇪** | NaN | 6.99 | 7.68 | 7.26 | 7.28 | 7.64 | 7.7 | **7.71** |
| **Japanese 🇯🇵** | NaN | 6.22 | **7.84** | 6.56 | 7.2 | 7.12 | 7.34 | 7.04 |
| **Russian 🇷🇺** | NaN | 8.28 | 7.94 | 8.19 | 8.3 | 8.74 | **8.94** | 8.81 |
We can see noticeable improvements in most languages compared to the base model. We also find that our ORPO models achieve the highest score out of all the models we evaluated for a number of languages.
# Training data
We trained this model using the [lightblue/mitsu_top25_borda](https://huggingface.co/datasets/lightblue/mitsu_top25_borda) dataset.
# Training configuration
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: lightblue/suzume-llama-3-8B-multilingual
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
rl: orpo
orpo_alpha: 0.1
remove_unused_columns: false
chat_template: chatml
datasets:
- path: lightblue/mitsu_top25_borda
type: orpo.chat_template
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual-orpo/prepared_mitsu_top25_borda
val_set_size: 0.02
output_dir: /workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top25_borda
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
use_wandb: true
wandb_project: axolotl
wandb_entity: peterd
wandb_name: mitsu_top25_borda
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 8e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 20
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top25_borda
This model is a fine-tuned version of [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) on the [lightblue/mitsu_top25_borda](https://huggingface.co/datasets/lightblue/mitsu_top25_borda) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6328 | 0.05 | 1 | 7.7812 |
| 7.7158 | 0.1 | 2 | 7.2589 |
| 7.2588 | 0.15 | 3 | 4.0580 |
| 4.0068 | 0.19 | 4 | 2.4598 |
| 2.4438 | 0.24 | 5 | 0.6504 |
| 0.6586 | 0.29 | 6 | 0.1129 |
| 0.1235 | 0.34 | 7 | 0.1066 |
| 0.1273 | 0.39 | 8 | 0.1041 |
| 0.1076 | 0.44 | 9 | 0.0987 |
| 0.1009 | 0.48 | 10 | 0.0940 |
| 0.1172 | 0.53 | 11 | 0.0885 |
| 0.1016 | 0.58 | 12 | 0.0867 |
| 0.1088 | 0.63 | 13 | 0.0859 |
| 0.095 | 0.68 | 14 | 0.0846 |
| 0.1101 | 0.73 | 15 | 0.0839 |
| 0.0969 | 0.78 | 16 | 0.0832 |
| 0.0864 | 0.82 | 17 | 0.0825 |
| 0.0918 | 0.87 | 18 | 0.0821 |
| 0.0927 | 0.92 | 19 | 0.0819 |
| 0.0967 | 0.97 | 20 | 0.0818 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# How to cite
```tex
@article{devine2024sure,
title={Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets},
author={Devine, Peter},
journal={arXiv preprint arXiv:2405.18952},
year={2024}
}
```
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn)) |
cmigozzi/test_model | cmigozzi | 2024-05-30T09:53:01Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T09:48:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Twenty1/Mistal7B-text-to-cypher | Twenty1 | 2024-05-30T09:50:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T09:47:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eightynine01/fewshot_5 | eightynine01 | 2024-05-30T09:49:53Z | 42 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"tinytimemixer",
"generated_from_trainer",
"base_model:ibm-granite/granite-timeseries-ttm-r1",
"base_model:finetune:ibm-granite/granite-timeseries-ttm-r1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T09:39:20Z | ---
license: apache-2.0
base_model: ibm/TTM
tags:
- generated_from_trainer
model-index:
- name: fewshot_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fewshot_5
This model is a fine-tuned version of [ibm/TTM](https://huggingface.co/ibm/TTM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2328 | 1.0 | 24 | 0.0388 |
| 0.2256 | 2.0 | 48 | 0.0388 |
| 0.2207 | 3.0 | 72 | 0.0386 |
| 0.2165 | 4.0 | 96 | 0.0386 |
| 0.2132 | 5.0 | 120 | 0.0386 |
| 0.2084 | 6.0 | 144 | 0.0387 |
| 0.2033 | 7.0 | 168 | 0.0392 |
| 0.1971 | 8.0 | 192 | 0.0400 |
| 0.1911 | 9.0 | 216 | 0.0412 |
| 0.1836 | 10.0 | 240 | 0.0422 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
faizalbs777/mistral-finetuned-samsum | faizalbs777 | 2024-05-30T09:48:52Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-30T07:31:12Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unspecified dataset.
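Since this repo holds a PEFT adapter on top of a GPTQ base, the following is a minimal, hedged inference sketch; the repo layout and the need for `optimum`/`auto-gptq` are assumptions — adjust to your setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
adapter_id = "faizalbs777/mistral-finetuned-samsum"  # this repo (assumed adapter layout)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # GPTQ loading requires optimum + auto-gptq
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

prompt = "Summarize the following dialogue: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=100)[0], skip_special_tokens=True))
```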
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
trungtienluong/experiments_23cau | trungtienluong | 2024-05-30T09:48:33Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vilm/vinallama-7b-chat",
"base_model:adapter:vilm/vinallama-7b-chat",
"license:llama2",
"region:us"
] | null | 2024-05-27T06:53:28Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: vilm/vinallama-7b-chat
model-index:
- name: experiments_23cau
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiments_23cau
This model is a fine-tuned version of [vilm/vinallama-7b-chat](https://huggingface.co/vilm/vinallama-7b-chat) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.36.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2 |
thanhpx/vistral_finetune_25e_8k | thanhpx | 2024-05-30T09:45:01Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T09:44:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HyperdustProtocol/HyperAuto_v1.0 | HyperdustProtocol | 2024-05-30T09:41:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T09:41:32Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** HyperdustProtocol
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
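A hedged sketch of loading this model for inference with Unsloth follows; whether the repo ships merged weights or a LoRA adapter is an assumption:

```python
from unsloth import FastLanguageModel

# Load the fine-tune in 4-bit; model_name is assumed to resolve to this repo.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HyperdustProtocol/HyperAuto_v1.0",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path
```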
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
anil1002/unsloth_phi3-4bit_gguf | anil1002 | 2024-05-30T09:41:34Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T09:40:18Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** anil1002
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HikariLight/Mistral_ACI_Bench_SFT | HikariLight | 2024-05-30T09:41:31Z | 52 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T09:14:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mergekit-community/TopEvolutionWiz | mergekit-community | 2024-05-30T09:40:20Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:lucyknada/microsoft_WizardLM-2-7B",
"base_model:merge:lucyknada/microsoft_WizardLM-2-7B",
"base_model:mergekit-community/TopEvolution",
"base_model:merge:mergekit-community/TopEvolution",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T09:33:26Z | ---
base_model:
- mergekit-community/TopEvolution
- lucyknada/microsoft_WizardLM-2-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mergekit-community/TopEvolution](https://huggingface.co/mergekit-community/TopEvolution)
* [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: lucyknada/microsoft_WizardLM-2-7B
- model: mergekit-community/TopEvolution
merge_method: slerp
base_model: mergekit-community/TopEvolution
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: TopEvolution for input & output, WizardLM-2 in the middle layers
```
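For intuition, SLERP interpolates along the arc between two weight tensors rather than along a straight line; with `t: [0, 0.5, 1, 0.5, 0]`, the first and last layers stay at the base model (`t = 0`) while the middle layers lean fully toward the other model (`t = 1`). A toy sketch of the interpolation itself (illustrative only, not mergekit's actual code):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between the tensors
    if omega.abs() < eps:  # nearly parallel -> plain linear interpolation
        return (1 - t) * a + t * b
    return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
```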
|
harshh1307/dish_rec_clm | harshh1307 | 2024-05-30T09:39:51Z | 224 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-07T11:19:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: dish_rec_clm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dish_rec_clm
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6101 | 1.0 | 1124 | 0.4599 |
| 0.4837 | 2.0 | 2248 | 0.3961 |
| 0.4429 | 3.0 | 3372 | 0.3795 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.2
- Tokenizers 0.13.3
|
dickdiss/phi-3_qlora_merged | dickdiss | 2024-05-30T09:35:54Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T09:33:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abdulqadir02/Pegasus-fine-tuned | abdulqadir02 | 2024-05-30T09:35:49Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-30T09:33:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
manangarg/ind-llm-tokenizer | manangarg | 2024-05-30T09:35:41Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-01-27T16:24:59Z | ---
license: apache-2.0
---
• Indic Language LLM Tokenizer
- This is an Indic-language NLP tokenizer merged with the LLaMA 2 tokenizer. |
kayfour/Llama-3-kayfour-Ko-8B | kayfour | 2024-05-30T09:33:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:adapter:beomi/Llama-3-Open-Ko-8B",
"region:us"
] | null | 2024-05-30T09:01:52Z | ---
library_name: peft
base_model: beomi/Llama-3-Open-Ko-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
mergekit-community/TopEvolution-DPO-32K | mergekit-community | 2024-05-30T09:32:40Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:mergekit-community/TopEvolution",
"base_model:merge:mergekit-community/TopEvolution",
"base_model:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B",
"base_model:merge:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T09:26:19Z | ---
base_model:
- mergekit-community/TopEvolution
- mpasila/Kunoichi-DPO-v2-Instruct-32k-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mergekit-community/TopEvolution](https://huggingface.co/mergekit-community/TopEvolution)
* [mpasila/Kunoichi-DPO-v2-Instruct-32k-7B](https://huggingface.co/mpasila/Kunoichi-DPO-v2-Instruct-32k-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mpasila/Kunoichi-DPO-v2-Instruct-32k-7B
- model: mergekit-community/TopEvolution
merge_method: slerp
base_model: mergekit-community/TopEvolution
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: TopEvolution for input & output, Kunoichi-DPO-v2-Instruct-32k in the middle layers
```
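For intuition on the `t` schedule, the five anchor values are spread across the layer stack and interpolated per layer; a hedged sketch of that mapping (mergekit's exact scheme may differ):

```python
import numpy as np

anchors = [0, 0.5, 1, 0.5, 0]              # from the config above
n_layers = 32                              # Mistral-7B decoder layers
anchor_pos = np.linspace(0, 1, len(anchors))
layer_pos = np.linspace(0, 1, n_layers)
t_per_layer = np.interp(layer_pos, anchor_pos, anchors)
print(t_per_layer.round(2))  # 0 at both ends, peaking at 1 mid-network
```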
|
Zihao995/gemma-chinese | Zihao995 | 2024-05-30T09:30:01Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-29T06:07:28Z | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: gemma-chinese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-chinese
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF | mradermacher | 2024-05-30T09:28:19Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"ChaoticNeutrals/RP_Vision_7B",
"ResplendentAI/DaturaCookie_7B",
"BioMistral/BioMistral-DARE-NS",
"MaziyarPanahi/Mistral-7B-Instruct-v0.3",
"en",
"base_model:jsfs11/MixtureofMerges-MoE-4x7bRP-v11",
"base_model:quantized:jsfs11/MixtureofMerges-MoE-4x7bRP-v11",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T06:01:05Z | ---
base_model: jsfs11/MixtureofMerges-MoE-4x7bRP-v11
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- ChaoticNeutrals/RP_Vision_7B
- ResplendentAI/DaturaCookie_7B
- BioMistral/BioMistral-DARE-NS
- MaziyarPanahi/Mistral-7B-Instruct-v0.3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jsfs11/MixtureofMerges-MoE-4x7bRP-v11
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
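As a minimal, hedged example, one of these files can be run with `llama-cpp-python` (the file name below matches the Q4_K_M entry in the table; adjust the path and context size to your setup):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="MixtureofMerges-MoE-4x7bRP-v11.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,
)
out = llm("Write a haiku about mixtures of experts.", max_tokens=64)
print(out["choices"][0]["text"])
```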
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q2_K.gguf) | Q2_K | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.IQ3_XS.gguf) | IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.IQ3_M.gguf) | IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q3_K_L.gguf) | Q3_K_L | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.IQ4_XS.gguf) | IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q5_K_S.gguf) | Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q5_K_M.gguf) | Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q6_K.gguf) | Q6_K | 19.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MixtureofMerges-MoE-4x7bRP-v11-GGUF/resolve/main/MixtureofMerges-MoE-4x7bRP-v11.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-5_0bpw_exl2 | Zoyd | 2024-05-30T09:18:43Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-30T03:33:51Z | ---
library_name: transformers
license: llama3
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-2_5bpw_exl2)**</center> | <center>23198 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-3_0bpw_exl2)**</center> | <center>27278 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-3_5bpw_exl2)**</center> | <center>31361 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-4_0bpw_exl2)**</center> | <center>35427 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-4_25bpw_exl2)**</center> | <center>37476 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-5_0bpw_exl2)**</center> | <center>43565 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-6_0bpw_exl2)**</center> | <center>51837 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-6_5bpw_exl2)**</center> | <center>56044 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-8_0bpw_exl2)**</center> | <center>63001 MB</center> | <center>8</center> |
# Llama-3-70B-Instruct-abliterated-v3.5 Model Card
[My original Jupyter "cookbook" to replicate the methodology can be found here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)
[My personal library o' code used](https://github.com/FailSpy/abliterator) (WIP, looking to improve and generalize)
This is [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
## V3.5?
Second try. I felt that the V3 methodology wasn't well applied to the 70B, and u/Nexesenex on Reddit kinda confirmed my suspicions. So go blame them. :P
This one has only a single layer modified(!) and that seems to have completely eliminated moralizing disclaimers.
I hope you'll find this model better than 70B-V3! This release also fixes the tokenizer.
## Hang on, "abliteration"? Orthogonalization? Ablation? What is this?
TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request, and it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model, just with the strongest refusal directions orthogonalized out.
**TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.**
As far as "abliteration" goes: it's just a fun play on words on the "ablation" term the original paper uses for removing features, which I coined specifically to differentiate the model from "uncensored" fine-tunes.
Ablate + obliterated = Abliterated
Anyway, orthogonalization and ablation both refer to the same thing here: the refusal feature was "ablated" from the model via orthogonalization.
## A little more on the methodology, and why this is interesting
To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt.
Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights.
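For intuition, here's a minimal sketch of the projection step in PyTorch. This is an illustrative toy, not the actual abliterator code: the refusal direction is random purely to make it runnable, whereas in practice it's estimated (per the linked paper) from the difference of mean residual-stream activations on refused vs. complied prompts.
```python
import torch

def ablate_direction(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Remove the component along direction d from a weight matrix that
    writes into the residual stream: W' = (I - d d^T) W."""
    d = d / d.norm()                  # unit "refusal" direction
    return W - torch.outer(d, d @ W)  # subtract the rank-1 projection

# Toy usage with a random direction (a real run would estimate d first)
d_model, d_in = 4096, 4096
W = torch.randn(d_model, d_in)
refusal_dir = torch.randn(d_model)
W_ablated = ablate_direction(W, refusal_dir)
# The ablated weights can no longer write anything along the direction:
d_unit = refusal_dir / refusal_dir.norm()
assert (d_unit @ W_ablated).abs().max() < 1e-3
```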
> Why this over fine-tuning?
Ablation is much more surgical in nature, and can be executed effectively with a _lot_ less data than fine-tuning, which I think is its main advantage.
As well, its most valuable aspect is that it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesirable manner. (In this case, refusing user requests.)
Fine-tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques.
It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa.
I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity.
> Okay, fine, but why V3? There's no V2 70B?
Well, I released a V2 a while back for 8B under Cognitive Computations.
It ended up being not worth it to try V2 with 70B; I wanted to refine the model before wasting compute cycles on what might not even be a better model.
I am, however, quite pleased with this latest methodology; it seems to have induced fewer hallucinations.
So, to show that it's a fancy new methodology even relative to the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went; in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one).
## Quirkiness awareness notice
This model may come with interesting quirks, as the methodology is so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what side effects this orthogonalization has.
If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord and I'm watching the Community tab; reach out! I'd love to see this methodology used in other ways, and would gladly support whomever I can, whenever I can.
|
gaianet/Codestral-22B-v0.1-GGUF | gaianet | 2024-05-30T09:16:15Z | 442 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"code",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-30T06:24:25Z | ---
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
model_name: Codestral-22B-v0.1
base_model: mistralai/Codestral-22B-v0.1
inference: false
model_creator: mistralai
quantized_by: Second State Inc.
tags:
- code
language:
- code
---

# Codestral-22B-v0.1-GGUF
## Original Model
[mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)
## Run with Gaianet
**Prompt template**
prompt template: `mistral-instruct`
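The `mistral-instruct` template corresponds to the prompt string below (the same string documented in the companion second-state/Codestral-22B-v0.1-GGUF card):
```text
<s>[INST] {prompt} [/INST]
```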
**Context size**
chat_ctx_size: `32000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Codestral-22B-v0.1-hf-Q2_K.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q2_K.gguf) | Q2_K | 2 | 8.27 GB| smallest, significant quality loss - not recommended for most purposes |
| [Codestral-22B-v0.1-hf-Q3_K_L.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q3_K_L.gguf) | Q3_K_L | 3 | 11.7 GB| small, substantial quality loss |
| [Codestral-22B-v0.1-hf-Q3_K_M.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q3_K_M.gguf) | Q3_K_M | 3 | 10.8 GB| very small, high quality loss |
| [Codestral-22B-v0.1-hf-Q3_K_S.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q3_K_S.gguf) | Q3_K_S | 3 | 9.64 GB| very small, high quality loss |
| [Codestral-22B-v0.1-hf-Q4_0.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q4_0.gguf) | Q4_0 | 4 | 12.6 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Codestral-22B-v0.1-hf-Q4_K_M.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q4_K_M.gguf) | Q4_K_M | 4 | 13.3 GB| medium, balanced quality - recommended |
| [Codestral-22B-v0.1-hf-Q4_K_S.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q4_K_S.gguf) | Q4_K_S | 4 | 12.7 GB| small, greater quality loss |
| [Codestral-22B-v0.1-hf-Q5_0.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q5_0.gguf) | Q5_0 | 5 | 15.3 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Codestral-22B-v0.1-hf-Q5_K_M.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q5_K_M.gguf) | Q5_K_M | 5 | 15.7 GB| large, very low quality loss - recommended |
| [Codestral-22B-v0.1-hf-Q5_K_S.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q5_K_S.gguf) | Q5_K_S | 5 | 15.3 GB| large, low quality loss - recommended |
| [Codestral-22B-v0.1-hf-Q6_K.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q6_K.gguf) | Q6_K | 6 | 18.3 GB| very large, extremely low quality loss |
| [Codestral-22B-v0.1-hf-Q8_0.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q8_0.gguf) | Q8_0 | 8 | 23.6 GB| very large, extremely low quality loss - not recommended |
| [Codestral-22B-v0.1-hf-f16.gguf](https://huggingface.co/gaianet/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-f16.gguf) | f16 | 16 | 44.5 GB| |
*Quantized with llama.cpp b3030.*
|
second-state/Codestral-22B-v0.1-GGUF | second-state | 2024-05-30T09:15:50Z | 271 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"code",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-30T06:01:37Z | ---
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
model_name: Codestral-22B-v0.1
base_model: mistralai/Codestral-22B-v0.1
inference: false
model_creator: mistralai
quantized_by: Second State Inc.
tags:
- code
language:
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Codestral-22B-v0.1-GGUF
## Original Model
[mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)
## Run with LlamaEdge
- LlamaEdge version: [v0.11.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.11.2)
- Prompt template
- Prompt type: `mistral-instruct`
- Prompt string
```text
<s>[INST] {prompt} [/INST]
```
- Context size: `32000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Codestral-22B-v0.1-hf-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template mistral-instruct \
--ctx-size 32000 \
--model-name Codestral-22B-v0.1
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Codestral-22B-v0.1-hf-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template mistral-instruct \
--ctx-size 32000
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Codestral-22B-v0.1-hf-Q2_K.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q2_K.gguf) | Q2_K | 2 | 8.27 GB| smallest, significant quality loss - not recommended for most purposes |
| [Codestral-22B-v0.1-hf-Q3_K_L.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q3_K_L.gguf) | Q3_K_L | 3 | 11.7 GB| small, substantial quality loss |
| [Codestral-22B-v0.1-hf-Q3_K_M.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q3_K_M.gguf) | Q3_K_M | 3 | 10.8 GB| very small, high quality loss |
| [Codestral-22B-v0.1-hf-Q3_K_S.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q3_K_S.gguf) | Q3_K_S | 3 | 9.64 GB| very small, high quality loss |
| [Codestral-22B-v0.1-hf-Q4_0.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q4_0.gguf) | Q4_0 | 4 | 12.6 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Codestral-22B-v0.1-hf-Q4_K_M.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q4_K_M.gguf) | Q4_K_M | 4 | 13.3 GB| medium, balanced quality - recommended |
| [Codestral-22B-v0.1-hf-Q4_K_S.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q4_K_S.gguf) | Q4_K_S | 4 | 12.7 GB| small, greater quality loss |
| [Codestral-22B-v0.1-hf-Q5_0.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q5_0.gguf) | Q5_0 | 5 | 15.3 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Codestral-22B-v0.1-hf-Q5_K_M.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q5_K_M.gguf) | Q5_K_M | 5 | 15.7 GB| large, very low quality loss - recommended |
| [Codestral-22B-v0.1-hf-Q5_K_S.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q5_K_S.gguf) | Q5_K_S | 5 | 15.3 GB| large, low quality loss - recommended |
| [Codestral-22B-v0.1-hf-Q6_K.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q6_K.gguf) | Q6_K | 6 | 18.3 GB| very large, extremely low quality loss |
| [Codestral-22B-v0.1-hf-Q8_0.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-Q8_0.gguf) | Q8_0 | 8 | 23.6 GB| very large, extremely low quality loss - not recommended |
| [Codestral-22B-v0.1-hf-f16.gguf](https://huggingface.co/second-state/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-hf-f16.gguf) | f16 | 16 | 44.5 GB| |
*Quantized with llama.cpp b3030.*
|
jkim40/videomae-base-finetuned-ucf101-subset | jkim40 | 2024-05-30T09:10:43Z | 64 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-05-30T08:49:08Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1136
- Accuracy: 0.9714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
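These settings map onto a 🤗 `TrainingArguments` configuration roughly like the sketch below; the actual training script is not included in this card, so `output_dir` and anything not listed above are placeholders:
```python
from transformers import TrainingArguments

# Sketch mirroring the listed hyperparameters. The Adam betas/epsilon and
# the linear scheduler shown above are the Trainer defaults.
args = TrainingArguments(
    output_dir="videomae-base-finetuned-ucf101-subset",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=148,
)
```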
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2221 | 0.2568 | 38 | 0.5035 | 0.7714 |
| 0.2566 | 1.2568 | 76 | 0.5705 | 0.8 |
| 0.0213 | 2.2568 | 114 | 0.0961 | 0.9857 |
| 0.0639 | 3.2297 | 148 | 0.1136 | 0.9714 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Sangto/gemma-1.1-7b-it-Q4_K_M-GGUF | Sangto | 2024-05-30T09:07:23Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T09:07:08Z | ---
license: gemma
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: How does the brain work?
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Sangto/gemma-1.1-7b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-1.1-7b-it`](https://huggingface.co/google/gemma-1.1-7b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-1.1-7b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Sangto/gemma-1.1-7b-it-Q4_K_M-GGUF --model gemma-1.1-7b-it-q4_k_m.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Sangto/gemma-1.1-7b-it-Q4_K_M-GGUF --model gemma-1.1-7b-it-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m gemma-1.1-7b-it-q4_k_m.gguf -n 128
```
|
Classical/Yinka | Classical | 2024-05-30T09:06:41Z | 537 | 17 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"mteb",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-30T08:40:46Z | ---
tags:
- mteb
model-index:
- name: checkpoint-1431
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 56.306314279047875
- type: cos_sim_spearman
value: 61.020227685004016
- type: euclidean_pearson
value: 58.61821670933433
- type: euclidean_spearman
value: 60.131457106640674
- type: manhattan_pearson
value: 58.6189460369694
- type: manhattan_spearman
value: 60.126350618526224
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 55.8612958476143
- type: cos_sim_spearman
value: 59.01977664864512
- type: euclidean_pearson
value: 62.028094897243655
- type: euclidean_spearman
value: 58.6046814257705
- type: manhattan_pearson
value: 62.02580042431887
- type: manhattan_spearman
value: 58.60626890004892
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.496
- type: f1
value: 46.673963383873065
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.73971622592535
- type: cos_sim_spearman
value: 72.76102992060764
- type: euclidean_pearson
value: 71.04525865868672
- type: euclidean_spearman
value: 72.4032852155075
- type: manhattan_pearson
value: 71.03693009336658
- type: manhattan_spearman
value: 72.39635701224252
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 56.34751074520767
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 48.4856662121073
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 89.26384109024997
- type: mrr
value: 91.27261904761905
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 90.0464058154547
- type: mrr
value: 92.06480158730159
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 27.236
- type: map_at_10
value: 40.778
- type: map_at_100
value: 42.692
- type: map_at_1000
value: 42.787
- type: map_at_3
value: 36.362
- type: map_at_5
value: 38.839
- type: mrr_at_1
value: 41.335
- type: mrr_at_10
value: 49.867
- type: mrr_at_100
value: 50.812999999999995
- type: mrr_at_1000
value: 50.848000000000006
- type: mrr_at_3
value: 47.354
- type: mrr_at_5
value: 48.718
- type: ndcg_at_1
value: 41.335
- type: ndcg_at_10
value: 47.642
- type: ndcg_at_100
value: 54.855
- type: ndcg_at_1000
value: 56.449000000000005
- type: ndcg_at_3
value: 42.203
- type: ndcg_at_5
value: 44.416
- type: precision_at_1
value: 41.335
- type: precision_at_10
value: 10.568
- type: precision_at_100
value: 1.6400000000000001
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.998
- type: precision_at_5
value: 17.389
- type: recall_at_1
value: 27.236
- type: recall_at_10
value: 58.80800000000001
- type: recall_at_100
value: 88.411
- type: recall_at_1000
value: 99.032
- type: recall_at_3
value: 42.253
- type: recall_at_5
value: 49.118
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.03728202044498
- type: cos_sim_ap
value: 92.49469583272597
- type: cos_sim_f1
value: 86.74095974528088
- type: cos_sim_precision
value: 84.43657294664601
- type: cos_sim_recall
value: 89.17465513210195
- type: dot_accuracy
value: 72.21888153938664
- type: dot_ap
value: 80.59377163340332
- type: dot_f1
value: 74.96686040583258
- type: dot_precision
value: 66.4737793851718
- type: dot_recall
value: 85.94809445873275
- type: euclidean_accuracy
value: 85.47203848466627
- type: euclidean_ap
value: 91.89152584749868
- type: euclidean_f1
value: 86.38105975197294
- type: euclidean_precision
value: 83.40953625081646
- type: euclidean_recall
value: 89.5721299976619
- type: manhattan_accuracy
value: 85.3758268190018
- type: manhattan_ap
value: 91.88989707722311
- type: manhattan_f1
value: 86.39767519839052
- type: manhattan_precision
value: 82.76231263383298
- type: manhattan_recall
value: 90.36707972878185
- type: max_accuracy
value: 86.03728202044498
- type: max_ap
value: 92.49469583272597
- type: max_f1
value: 86.74095974528088
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 74.34100000000001
- type: map_at_10
value: 82.49499999999999
- type: map_at_100
value: 82.64200000000001
- type: map_at_1000
value: 82.643
- type: map_at_3
value: 81.142
- type: map_at_5
value: 81.95400000000001
- type: mrr_at_1
value: 74.71
- type: mrr_at_10
value: 82.553
- type: mrr_at_100
value: 82.699
- type: mrr_at_1000
value: 82.70100000000001
- type: mrr_at_3
value: 81.279
- type: mrr_at_5
value: 82.069
- type: ndcg_at_1
value: 74.605
- type: ndcg_at_10
value: 85.946
- type: ndcg_at_100
value: 86.607
- type: ndcg_at_1000
value: 86.669
- type: ndcg_at_3
value: 83.263
- type: ndcg_at_5
value: 84.71600000000001
- type: precision_at_1
value: 74.605
- type: precision_at_10
value: 9.758
- type: precision_at_100
value: 1.005
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.996000000000002
- type: precision_at_5
value: 18.736
- type: recall_at_1
value: 74.34100000000001
- type: recall_at_10
value: 96.523
- type: recall_at_100
value: 99.473
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 92.83500000000001
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.950000000000003
- type: map_at_10
value: 82.408
- type: map_at_100
value: 85.057
- type: map_at_1000
value: 85.09100000000001
- type: map_at_3
value: 57.635999999999996
- type: map_at_5
value: 72.48
- type: mrr_at_1
value: 92.15
- type: mrr_at_10
value: 94.554
- type: mrr_at_100
value: 94.608
- type: mrr_at_1000
value: 94.61
- type: mrr_at_3
value: 94.292
- type: mrr_at_5
value: 94.459
- type: ndcg_at_1
value: 92.15
- type: ndcg_at_10
value: 89.108
- type: ndcg_at_100
value: 91.525
- type: ndcg_at_1000
value: 91.82900000000001
- type: ndcg_at_3
value: 88.44
- type: ndcg_at_5
value: 87.271
- type: precision_at_1
value: 92.15
- type: precision_at_10
value: 42.29
- type: precision_at_100
value: 4.812
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 79.14999999999999
- type: precision_at_5
value: 66.64
- type: recall_at_1
value: 26.950000000000003
- type: recall_at_10
value: 89.832
- type: recall_at_100
value: 97.921
- type: recall_at_1000
value: 99.471
- type: recall_at_3
value: 59.562000000000005
- type: recall_at_5
value: 76.533
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 53.5
- type: map_at_10
value: 63.105999999999995
- type: map_at_100
value: 63.63100000000001
- type: map_at_1000
value: 63.641999999999996
- type: map_at_3
value: 60.617
- type: map_at_5
value: 62.132
- type: mrr_at_1
value: 53.5
- type: mrr_at_10
value: 63.105999999999995
- type: mrr_at_100
value: 63.63100000000001
- type: mrr_at_1000
value: 63.641999999999996
- type: mrr_at_3
value: 60.617
- type: mrr_at_5
value: 62.132
- type: ndcg_at_1
value: 53.5
- type: ndcg_at_10
value: 67.92200000000001
- type: ndcg_at_100
value: 70.486
- type: ndcg_at_1000
value: 70.777
- type: ndcg_at_3
value: 62.853
- type: ndcg_at_5
value: 65.59899999999999
- type: precision_at_1
value: 53.5
- type: precision_at_10
value: 8.309999999999999
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.1
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 53.5
- type: recall_at_10
value: 83.1
- type: recall_at_100
value: 95.1
- type: recall_at_1000
value: 97.39999999999999
- type: recall_at_3
value: 69.3
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 51.773759138130046
- type: f1
value: 40.38600802756481
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 88.48030018761726
- type: ap
value: 59.2732541555627
- type: f1
value: 83.58836007358619
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 73.67511194245922
- type: cos_sim_spearman
value: 79.43347759067298
- type: euclidean_pearson
value: 79.04491504318766
- type: euclidean_spearman
value: 79.14478545356785
- type: manhattan_pearson
value: 79.03365022867428
- type: manhattan_spearman
value: 79.13172717619908
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 67.184
- type: map_at_10
value: 76.24600000000001
- type: map_at_100
value: 76.563
- type: map_at_1000
value: 76.575
- type: map_at_3
value: 74.522
- type: map_at_5
value: 75.598
- type: mrr_at_1
value: 69.47
- type: mrr_at_10
value: 76.8
- type: mrr_at_100
value: 77.082
- type: mrr_at_1000
value: 77.093
- type: mrr_at_3
value: 75.29400000000001
- type: mrr_at_5
value: 76.24
- type: ndcg_at_1
value: 69.47
- type: ndcg_at_10
value: 79.81099999999999
- type: ndcg_at_100
value: 81.187
- type: ndcg_at_1000
value: 81.492
- type: ndcg_at_3
value: 76.536
- type: ndcg_at_5
value: 78.367
- type: precision_at_1
value: 69.47
- type: precision_at_10
value: 9.599
- type: precision_at_100
value: 1.026
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.777
- type: precision_at_5
value: 18.232
- type: recall_at_1
value: 67.184
- type: recall_at_10
value: 90.211
- type: recall_at_100
value: 96.322
- type: recall_at_1000
value: 98.699
- type: recall_at_3
value: 81.556
- type: recall_at_5
value: 85.931
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.96032279757901
- type: f1
value: 73.48052314033545
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.64357767316744
- type: f1
value: 83.58250539497922
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 56.00000000000001
- type: map_at_10
value: 62.066
- type: map_at_100
value: 62.553000000000004
- type: map_at_1000
value: 62.598
- type: map_at_3
value: 60.4
- type: map_at_5
value: 61.370000000000005
- type: mrr_at_1
value: 56.2
- type: mrr_at_10
value: 62.166
- type: mrr_at_100
value: 62.653000000000006
- type: mrr_at_1000
value: 62.699000000000005
- type: mrr_at_3
value: 60.5
- type: mrr_at_5
value: 61.47
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 65.199
- type: ndcg_at_100
value: 67.79899999999999
- type: ndcg_at_1000
value: 69.056
- type: ndcg_at_3
value: 61.814
- type: ndcg_at_5
value: 63.553000000000004
- type: precision_at_1
value: 56.00000000000001
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 0.878
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 21.967
- type: precision_at_5
value: 14.02
- type: recall_at_1
value: 56.00000000000001
- type: recall_at_10
value: 75.1
- type: recall_at_100
value: 87.8
- type: recall_at_1000
value: 97.7
- type: recall_at_3
value: 65.9
- type: recall_at_5
value: 70.1
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 32.74158258279793
- type: mrr
value: 31.56071428571428
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 78.96666666666667
- type: f1
value: 78.82528563818045
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.54087709799674
- type: cos_sim_ap
value: 87.26170197077586
- type: cos_sim_f1
value: 84.7609561752988
- type: cos_sim_precision
value: 80.20735155513667
- type: cos_sim_recall
value: 89.86272439281943
- type: dot_accuracy
value: 72.22523010286952
- type: dot_ap
value: 79.51975358187732
- type: dot_f1
value: 76.32183908045977
- type: dot_precision
value: 67.58957654723126
- type: dot_recall
value: 87.64519535374869
- type: euclidean_accuracy
value: 82.0249052517596
- type: euclidean_ap
value: 85.32829948726406
- type: euclidean_f1
value: 83.24924318869829
- type: euclidean_precision
value: 79.71014492753623
- type: euclidean_recall
value: 87.11721224920802
- type: manhattan_accuracy
value: 82.13318895506227
- type: manhattan_ap
value: 85.28856869288006
- type: manhattan_f1
value: 83.34946757018393
- type: manhattan_precision
value: 76.94369973190348
- type: manhattan_recall
value: 90.91869060190075
- type: max_accuracy
value: 83.54087709799674
- type: max_ap
value: 87.26170197077586
- type: max_f1
value: 84.7609561752988
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 94.56
- type: ap
value: 92.80848436710805
- type: f1
value: 94.54951966576111
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 39.0866558287863
- type: cos_sim_spearman
value: 45.9211126233312
- type: euclidean_pearson
value: 44.86568743222145
- type: euclidean_spearman
value: 45.63882757207507
- type: manhattan_pearson
value: 44.89480036909126
- type: manhattan_spearman
value: 45.65929449046206
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 43.04701793979569
- type: cos_sim_spearman
value: 44.87491033760315
- type: euclidean_pearson
value: 36.2004061032567
- type: euclidean_spearman
value: 41.44823909683865
- type: manhattan_pearson
value: 36.136113427955095
- type: manhattan_spearman
value: 41.39225495993949
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 61.65611315777857
- type: cos_sim_spearman
value: 64.4067673105648
- type: euclidean_pearson
value: 61.814977248797184
- type: euclidean_spearman
value: 63.99473350700169
- type: manhattan_pearson
value: 61.684304629588624
- type: manhattan_spearman
value: 63.97831213239316
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 76.57324933064379
- type: cos_sim_spearman
value: 79.23602286949782
- type: euclidean_pearson
value: 80.28226284310948
- type: euclidean_spearman
value: 80.32210477608423
- type: manhattan_pearson
value: 80.27262188617811
- type: manhattan_spearman
value: 80.31619185039723
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 67.05266891356277
- type: mrr
value: 77.1906333623497
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 28.212
- type: map_at_10
value: 78.932
- type: map_at_100
value: 82.51899999999999
- type: map_at_1000
value: 82.575
- type: map_at_3
value: 55.614
- type: map_at_5
value: 68.304
- type: mrr_at_1
value: 91.211
- type: mrr_at_10
value: 93.589
- type: mrr_at_100
value: 93.659
- type: mrr_at_1000
value: 93.662
- type: mrr_at_3
value: 93.218
- type: mrr_at_5
value: 93.453
- type: ndcg_at_1
value: 91.211
- type: ndcg_at_10
value: 86.24000000000001
- type: ndcg_at_100
value: 89.614
- type: ndcg_at_1000
value: 90.14
- type: ndcg_at_3
value: 87.589
- type: ndcg_at_5
value: 86.265
- type: precision_at_1
value: 91.211
- type: precision_at_10
value: 42.626
- type: precision_at_100
value: 5.043
- type: precision_at_1000
value: 0.517
- type: precision_at_3
value: 76.42
- type: precision_at_5
value: 64.045
- type: recall_at_1
value: 28.212
- type: recall_at_10
value: 85.223
- type: recall_at_100
value: 96.229
- type: recall_at_1000
value: 98.849
- type: recall_at_3
value: 57.30800000000001
- type: recall_at_5
value: 71.661
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 54.385000000000005
- type: f1
value: 52.38762400903556
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 74.55283855942916
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 68.55115316700493
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 58.8
- type: map_at_10
value: 69.035
- type: map_at_100
value: 69.52000000000001
- type: map_at_1000
value: 69.529
- type: map_at_3
value: 67.417
- type: map_at_5
value: 68.407
- type: mrr_at_1
value: 58.8
- type: mrr_at_10
value: 69.035
- type: mrr_at_100
value: 69.52000000000001
- type: mrr_at_1000
value: 69.529
- type: mrr_at_3
value: 67.417
- type: mrr_at_5
value: 68.407
- type: ndcg_at_1
value: 58.8
- type: ndcg_at_10
value: 73.395
- type: ndcg_at_100
value: 75.62
- type: ndcg_at_1000
value: 75.90299999999999
- type: ndcg_at_3
value: 70.11800000000001
- type: ndcg_at_5
value: 71.87400000000001
- type: precision_at_1
value: 58.8
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 25.967000000000002
- type: precision_at_5
value: 16.42
- type: recall_at_1
value: 58.8
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.89999999999999
- type: recall_at_1000
value: 99.2
- type: recall_at_3
value: 77.9
- type: recall_at_5
value: 82.1
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.42
- type: ap
value: 75.35978503182068
- type: f1
value: 88.01006394348263
---
## Yinka
The Yinka embedding model is continue-trained from the open-source model [stella-v3.5-mrl](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d), using the multi-task hybrid loss training described in [piccolo2](https://huggingface.co/sensenova/piccolo-large-zh-v2). Likewise, this model also supports variable vector dimensions.
## Usage
This model is used in the same way as [stella-v3.5-mrl](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d); no prefix is required.
```python
from sentence_transformers import SentenceTransformer
from sklearn.preprocessing import normalize
model = SentenceTransformer("Classical/Yinka")
# Note: do not normalize yet! Truncate to the first n dims, then normalize
vectors = model.encode(["text1", "text2"], normalize_embeddings=False)
print(vectors.shape) # shape is [2,1792]
n_dims = 768
cut_vecs = normalize(vectors[:, :n_dims])
```
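Once truncated and re-normalized as above, cosine similarity is just a dot product:
```python
import numpy as np

# cut_vecs comes from the snippet above and is already L2-normalized,
# so the dot product equals the cosine similarity.
sim = float(np.dot(cut_vecs[0], cut_vecs[1]))
print(f"cosine similarity at {n_dims} dims: {sim:.4f}")
```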
## Results
| Model Name | Model Size (GB) | Dimension | Sequence Length | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) | Average (35) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [Yinka](https://huggingface.co/Classical/Yinka) | 1.21 | 1792 | 512 | 74.30 | 61.99 | 89.87 | 69.77 | 74.40 | 63.30 | 70.79 |
| [stella-v3.5-mrl](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) |1.21 | 1792 | 512 | 71.56 | 54.39 | 88.09 | 68.45 | 73.51 | 62.48 | 68.56 |
| [piccolo-large-zh-v2](https://huggingface.co/sensenova/piccolo-large-zh-v2) | 1.21 | 1792 | 512 | 74.59 | 62.17 | 90.24 | 70 | 74.36 | 63.5 | 70.95 |
## Training Details
TODO
## Licence
This model is released under the MIT licence. |
0xfaskety/Qwen-Qwen1.5-7B-1717059598 | 0xfaskety | 2024-05-30T09:06:41Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T09:00:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OwOpeepeepoopoo/TheDumpheys30acc | OwOpeepeepoopoo | 2024-05-30T09:06:12Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-30T09:05:08Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# output_throuple_acc
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* /notebooks/dippy-bittensor-subnet/clone_output_throuple1
* /notebooks/dippy-bittensor-subnet/clone_tistak_q2jAQgBpjC51Fudg
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: /notebooks/dippy-bittensor-subnet/clone_tistak_q2jAQgBpjC51Fudg
layer_range: [0, 24]
- model: /notebooks/dippy-bittensor-subnet/clone_output_throuple1
layer_range: [0, 24]
merge_method: slerp
base_model: /notebooks/dippy-bittensor-subnet/clone_tistak_q2jAQgBpjC51Fudg
parameters:
t:
- filter: self_attn
value: [0.1, 0.3, 0.5, 0.7, 0.9]
- filter: mlp
value: [0.9, 0.7, 0.5, 0.3, 0.1]
- value: 0.5
dtype: bfloat16
```
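For intuition, SLERP interpolates along the great-circle arc between two weight tensors rather than a straight line; a minimal sketch (not mergekit's actual implementation, and ignoring the per-filter `t` schedules above) could look like this:
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped tensors."""
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    # Angle between the flattened tensors, clamped for numerical safety
    omega = torch.acos(torch.clamp((a_n * b_n).sum(), -1.0 + 1e-7, 1.0 - 1e-7))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly colinear: fall back to plain lerp
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```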
|
dro14/xlm-roberta-base-finetuned-panx-de-fr | dro14 | 2024-05-30T09:05:14Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-30T08:55:00Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Agita/DistilBERT_test | Agita | 2024-05-30T09:01:41Z | 154 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"feature-extraction",
"dataset:takala/financial_phrasebank",
"arxiv:1910.09700",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-30T04:06:59Z | ---
library_name: transformers
license: apache-2.0
datasets:
- takala/financial_phrasebank
metrics:
- accuracy
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jrahn/llama-3-8b-claudstruct-v1 | jrahn | 2024-05-30T08:56:59Z | 4 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"en",
"dataset:Norquinal/claude_multi_instruct_30k",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-30T07:53:15Z | ---
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: llama-3-8b-claudstruct-v1
results: []
datasets:
- Norquinal/claude_multi_instruct_30k
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: llama3
datasets:
- path: Norquinal/claude_multi_instruct_30k
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/out/
adapter: qlora
lora_model_dir:
sequence_len: 512
sample_packing: false
pad_to_sequence_len: true
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 8
num_epochs: 1
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: true
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# llama-3-8b-claudstruct-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [Norquinal/claude_multi_instruct_30k](https://huggingface.co/datasets/Norquinal/claude_multi_instruct_30k) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6559
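Since this repository holds a QLoRA adapter (see the axolotl config above), loading it for inference could look like the sketch below; this is an assumption for illustration, not part of the original training setup:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the Llama-3 base model and applies this adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "jrahn/llama-3-8b-claudstruct-v1", device_map="auto"
)
# Tokenizer taken from the base model, in case the adapter repo omits it.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```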
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2209 | 0.0007 | 1 | 2.0399 |
| 1.7856 | 0.2502 | 341 | 1.6985 |
| 1.6989 | 0.5004 | 682 | 1.6659 |
| 1.6892 | 0.7506 | 1023 | 1.6559 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1 |
WionaGlaenzer/VDJoy_70million | WionaGlaenzer | 2024-05-30T08:55:23Z | 196 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-12-15T15:31:44Z | ---
tags:
- biology
- antibody
---
A RoBERTa antibody language model trained on 70 million human VDJ sequences from WionaGlaenzer/oas_70million_human.
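A minimal usage sketch (assuming standard `transformers` fill-mask inference; the example sequence is a placeholder):

```python
from transformers import pipeline

# Masked-language modelling over antibody sequences.
fill = pipeline("fill-mask", model="WionaGlaenzer/VDJoy_70million")

# Predict a masked residue in a (hypothetical) space-separated VDJ sequence.
print(fill(f"E V Q L {fill.tokenizer.mask_token} E S G G"))
```
|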
ZaneHorrible/ViTL-32-384-1e4-batch_16_epoch_4_classes_24 | ZaneHorrible | 2024-05-30T08:54:21Z | 236 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch32-384",
"base_model:finetune:google/vit-large-patch32-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-30T05:43:38Z | ---
license: apache-2.0
base_model: google/vit-large-patch32-384
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: ViTL-32-384-1e4-batch_16_epoch_4_classes_24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9755747126436781
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTL-32-384-1e4-batch_16_epoch_4_classes_24
This model is a fine-tuned version of [google/vit-large-patch32-384](https://huggingface.co/google/vit-large-patch32-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1157
- Accuracy: 0.9756
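A minimal inference sketch (assuming the standard `transformers` image-classification pipeline; the image path is a placeholder):

```python
from transformers import pipeline

# Fine-tuned ViT-L/32 (384px) classifier over 24 classes.
classifier = pipeline(
    "image-classification",
    model="ZaneHorrible/ViTL-32-384-1e4-batch_16_epoch_4_classes_24",
)

print(classifier("example.jpg"))  # top labels with confidence scores
```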
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3336 | 0.03 | 100 | 0.2980 | 0.9325 |
| 0.0235 | 0.07 | 200 | 0.1580 | 0.9612 |
| 0.0381 | 0.1 | 300 | 0.2212 | 0.9540 |
| 0.0507 | 0.14 | 400 | 0.4664 | 0.9037 |
| 0.0052 | 0.17 | 500 | 0.1737 | 0.9670 |
| 0.0499 | 0.21 | 600 | 0.2187 | 0.9511 |
| 0.0454 | 0.24 | 700 | 0.1837 | 0.9569 |
| 0.0317 | 0.28 | 800 | 0.2616 | 0.9497 |
| 0.0594 | 0.31 | 900 | 0.1867 | 0.9555 |
| 0.0583 | 0.35 | 1000 | 0.1817 | 0.9569 |
| 0.0044 | 0.38 | 1100 | 0.2358 | 0.9497 |
| 0.0836 | 0.42 | 1200 | 0.2422 | 0.9454 |
| 0.0712 | 0.45 | 1300 | 0.1943 | 0.9555 |
| 0.0399 | 0.49 | 1400 | 0.2922 | 0.9440 |
| 0.0098 | 0.52 | 1500 | 0.3783 | 0.9325 |
| 0.0414 | 0.56 | 1600 | 0.2583 | 0.9454 |
| 0.1085 | 0.59 | 1700 | 0.2241 | 0.9511 |
| 0.0492 | 0.63 | 1800 | 0.2813 | 0.9368 |
| 0.044 | 0.66 | 1900 | 0.3361 | 0.9353 |
| 0.0344 | 0.7 | 2000 | 0.2549 | 0.9468 |
| 0.002 | 0.73 | 2100 | 0.1794 | 0.9641 |
| 0.0731 | 0.77 | 2200 | 0.2300 | 0.9540 |
| 0.0151 | 0.8 | 2300 | 0.2050 | 0.9569 |
| 0.0031 | 0.84 | 2400 | 0.2175 | 0.9454 |
| 0.1015 | 0.87 | 2500 | 0.1725 | 0.9626 |
| 0.0383 | 0.91 | 2600 | 0.2104 | 0.9540 |
| 0.0926 | 0.94 | 2700 | 0.1762 | 0.9540 |
| 0.0001 | 0.98 | 2800 | 0.1978 | 0.9612 |
| 0.1365 | 1.01 | 2900 | 0.1512 | 0.9655 |
| 0.083 | 1.04 | 3000 | 0.1298 | 0.9641 |
| 0.0002 | 1.08 | 3100 | 0.1976 | 0.9540 |
| 0.0042 | 1.11 | 3200 | 0.1719 | 0.9698 |
| 0.0002 | 1.15 | 3300 | 0.1924 | 0.9583 |
| 0.0002 | 1.18 | 3400 | 0.1732 | 0.9626 |
| 0.0978 | 1.22 | 3500 | 0.1902 | 0.9612 |
| 0.1067 | 1.25 | 3600 | 0.1868 | 0.9612 |
| 0.0005 | 1.29 | 3700 | 0.2166 | 0.9468 |
| 0.0007 | 1.32 | 3800 | 0.2293 | 0.9425 |
| 0.0001 | 1.36 | 3900 | 0.2296 | 0.9626 |
| 0.0001 | 1.39 | 4000 | 0.1685 | 0.9684 |
| 0.0001 | 1.43 | 4100 | 0.2106 | 0.9655 |
| 0.0004 | 1.46 | 4200 | 0.1614 | 0.9670 |
| 0.0 | 1.5 | 4300 | 0.1311 | 0.9727 |
| 0.0 | 1.53 | 4400 | 0.1445 | 0.9784 |
| 0.0433 | 1.57 | 4500 | 0.1544 | 0.9727 |
| 0.0263 | 1.6 | 4600 | 0.2133 | 0.9626 |
| 0.0 | 1.64 | 4700 | 0.1903 | 0.9598 |
| 0.0 | 1.67 | 4800 | 0.1587 | 0.9583 |
| 0.0 | 1.71 | 4900 | 0.1817 | 0.9655 |
| 0.1503 | 1.74 | 5000 | 0.2346 | 0.9526 |
| 0.0699 | 1.78 | 5100 | 0.1143 | 0.9713 |
| 0.0004 | 1.81 | 5200 | 0.1937 | 0.9626 |
| 0.0001 | 1.85 | 5300 | 0.2660 | 0.9540 |
| 0.2208 | 1.88 | 5400 | 0.1500 | 0.9713 |
| 0.0494 | 1.92 | 5500 | 0.1203 | 0.9698 |
| 0.0001 | 1.95 | 5600 | 0.1231 | 0.9756 |
| 0.0001 | 1.99 | 5700 | 0.1254 | 0.9698 |
| 0.0 | 2.02 | 5800 | 0.1622 | 0.9684 |
| 0.0001 | 2.06 | 5900 | 0.1464 | 0.9698 |
| 0.0 | 2.09 | 6000 | 0.1420 | 0.9698 |
| 0.0 | 2.12 | 6100 | 0.1416 | 0.9698 |
| 0.0 | 2.16 | 6200 | 0.1408 | 0.9698 |
| 0.0001 | 2.19 | 6300 | 0.1402 | 0.9698 |
| 0.0147 | 2.23 | 6400 | 0.1536 | 0.9655 |
| 0.0 | 2.26 | 6500 | 0.1944 | 0.9612 |
| 0.0 | 2.3 | 6600 | 0.1724 | 0.9684 |
| 0.0003 | 2.33 | 6700 | 0.1910 | 0.9612 |
| 0.0003 | 2.37 | 6800 | 0.1995 | 0.9626 |
| 0.0004 | 2.4 | 6900 | 0.1563 | 0.9655 |
| 0.0 | 2.44 | 7000 | 0.1460 | 0.9727 |
| 0.0 | 2.47 | 7100 | 0.1434 | 0.9727 |
| 0.0 | 2.51 | 7200 | 0.1242 | 0.9741 |
| 0.0041 | 2.54 | 7300 | 0.1364 | 0.9713 |
| 0.0 | 2.58 | 7400 | 0.1396 | 0.9684 |
| 0.0 | 2.61 | 7500 | 0.1371 | 0.9655 |
| 0.0 | 2.65 | 7600 | 0.1373 | 0.9684 |
| 0.0 | 2.68 | 7700 | 0.1230 | 0.9698 |
| 0.0 | 2.72 | 7800 | 0.1225 | 0.9698 |
| 0.0 | 2.75 | 7900 | 0.1223 | 0.9698 |
| 0.0001 | 2.79 | 8000 | 0.1218 | 0.9698 |
| 0.0 | 2.82 | 8100 | 0.1186 | 0.9756 |
| 0.0 | 2.86 | 8200 | 0.1183 | 0.9756 |
| 0.0 | 2.89 | 8300 | 0.1167 | 0.9756 |
| 0.0 | 2.93 | 8400 | 0.1163 | 0.9756 |
| 0.0 | 2.96 | 8500 | 0.1162 | 0.9756 |
| 0.0 | 3.0 | 8600 | 0.1157 | 0.9756 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
WionaGlaenzer/AntiBERTa_ethz | WionaGlaenzer | 2024-05-30T08:53:33Z | 183 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-09-26T13:33:22Z | RoBERTa Antibody language model |
GeorgeDaDude/jb_sytem_bin_judge_base | GeorgeDaDude | 2024-05-30T08:49:13Z | 186 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-23T06:42:44Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: jb_sytem_bin_judge_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jb_sytem_bin_judge_base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3564
- Accuracy: 0.9157
- Recall: 0.9147
- Precision: 0.8845
- F1: 0.8994
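A minimal inference sketch (assuming standard `transformers` text-classification usage; the input string is a placeholder):

```python
from transformers import pipeline

# Binary judge classifier fine-tuned from roberta-base.
judge = pipeline(
    "text-classification",
    model="GeorgeDaDude/jb_sytem_bin_judge_base",
)

print(judge("Example response to be judged."))
```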
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3409 | 1.0 | 1708 | 0.3559 | 0.8946 | 0.9254 | 0.8362 | 0.8785 |
| 0.3666 | 2.0 | 3416 | 0.3907 | 0.8973 | 0.8188 | 0.9231 | 0.8678 |
| 0.25 | 3.0 | 5124 | 0.3385 | 0.9148 | 0.8977 | 0.8957 | 0.8967 |
| 0.0546 | 4.0 | 6832 | 0.3714 | 0.9087 | 0.9147 | 0.8702 | 0.8919 |
| 0.363 | 5.0 | 8540 | 0.3564 | 0.9157 | 0.9147 | 0.8845 | 0.8994 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
cetusian/distilbert-furniture-names | cetusian | 2024-05-30T08:48:43Z | 64 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-30T08:22:53Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: cetusian/distilbert-furniture-names
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cetusian/distilbert-furniture-names
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2307
- Validation Loss: 0.2533
- Train Precision: 0.0
- Train Recall: 0.0
- Train F1: 0.0
- Train Accuracy: 0.9466
- Epoch: 2
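A minimal inference sketch (the checkpoint is TensorFlow, so the TF backend is requested explicitly; the example sentence is a placeholder). Note that the reported entity-level precision/recall/F1 of 0.0 suggests extraction quality is unverified:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cetusian/distilbert-furniture-names",
    framework="tf",                  # TF checkpoint
    aggregation_strategy="simple",   # merge word-piece predictions
)

print(ner("The store sells the Oslo oak dining table."))
```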
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 27, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.2398 | 0.2569 | 0.0 | 0.0 | 0.0 | 0.9466 | 0 |
| 0.2284 | 0.2533 | 0.0 | 0.0 | 0.0 | 0.9466 | 1 |
| 0.2307 | 0.2533 | 0.0 | 0.0 | 0.0 | 0.9466 | 2 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
redav/model-1 | redav | 2024-05-30T08:41:38Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T08:41:36Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** redav
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
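A minimal loading sketch with Unsloth (assuming this repo holds the fine-tuned weights; the sequence length and prompt are placeholders):

```python
from unsloth import FastLanguageModel

# Load in 4-bit, matching the unsloth/mistral-7b-bnb-4bit base.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="redav/model-1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```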
|
Netta1994/setfit_baai_600 | Netta1994 | 2024-05-30T08:41:22Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-05-30T08:40:39Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Netta1994/setfit_baai_600
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
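For reference, a minimal training sketch in the same spirit (using the pre-1.0 `SetFitTrainer` API; the base encoder and toy dataset are assumptions, not the exact recipe behind this checkpoint):

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset; few-shot efficiency is the point of SetFit.
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")  # assumed encoder
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()
```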
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_600")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-3_75bpw_exl2 | Zoyd | 2024-05-30T08:40:16Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-29T13:03:48Z | ---
library_name: transformers
license: llama3
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-2_5bpw_exl2)**</center> | <center>23198 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-3_0bpw_exl2)**</center> | <center>27278 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-3_5bpw_exl2)**</center> | <center>31361 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-4_0bpw_exl2)**</center> | <center>35427 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-4_25bpw_exl2)**</center> | <center>37476 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-5_0bpw_exl2)**</center> | <center>43565 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-6_0bpw_exl2)**</center> | <center>51837 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-6_5bpw_exl2)**</center> | <center>56044 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/failspy_Meta-Llama-3-70B-Instruct-abliterated-v3.5-8_0bpw_exl2)**</center> | <center>63001 MB</center> | <center>8</center> |
# Llama-3-70B-Instruct-abliterated-v3.5 Model Card
[My original Jupyter "cookbook" to replicate the methodology can be found here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)
[My personal library o' code used](https://github.com/FailSpy/abliterator) (WIP, looking to improve and generalize)
This is [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
## V3.5?
Second try. I felt that the V3 methodology wasn't well applied to 70B, and u/Nexesenex on reddit kinda confirmed my suspicions. So go blame them. :P
This one has only a single layer modified(!), and that seems to have completely eliminated moralizing disclaimers.
I hope you'll find this model better than 70B-V3! This release also fixes the tokenizer.
## Hang on, "abliteration"? Orthogonalization? Ablation? What is this?
TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model was, just with the strongest refusal directions orthogonalized out.
**TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.**
As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes.
Ablate + obliterated = Abliterated
Anyways, orthogonalization/ablation are both aspects to refer to the same thing here, the technique in which the refusal feature was "ablated" from the model was via orthogonalization.
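For intuition, the core operation can be sketched as projecting the refusal direction out of a weight matrix (a minimal illustration, not the exact script used here; `refusal_dir` would come from contrasting activations on harmful vs. harmless prompts):

```python
import torch

def orthogonalize(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """W' = W - r r^T W, so every output of W' is orthogonal to r."""
    r = refusal_dir / refusal_dir.norm()
    return W - torch.outer(r, r @ W)
```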
## A little more on the methodology, and why this is interesting
To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt.
Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights.
> Why this over fine-tuning?
Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
Its most valuable aspect is that it keeps as much of the original model's knowledge and training intact as possible, whilst removing its tendency to behave in one very specific undesirable manner. (In this case, refusing user requests.)
Fine-tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques.
It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa.
I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity.
> Okay, fine, but why V3? There's no V2 70B?
Well, I released a V2 a while back for 8B under Cognitive Computations.
It ended up being not worth it to try V2 with 70B; I wanted to refine the model before wasting compute cycles on what might not even be a better model.
I am, however, quite pleased with this latest methodology; it seems to have induced fewer hallucinations.
So, to show that it's a newer, fancier methodology than even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went; in actuality, it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one.)
## Quirkiness awareness notice
This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects.
If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
|
jsfamily/korean-small_t332 | jsfamily | 2024-05-30T08:36:43Z | 88 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:korean_samll_dataset13",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-30T08:34:10Z | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- korean_samll_dataset13
model-index:
- name: korean-small_t332
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean-small_t332
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the korean_samll_dataset13 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1984
- eval_cer: 8.5920
- eval_runtime: 1259.2086
- eval_samples_per_second: 3.016
- eval_steps_per_second: 0.377
- step: 0
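A minimal inference sketch (assuming the standard `transformers` ASR pipeline; the audio path is a placeholder):

```python
from transformers import pipeline

# Korean speech recognition with the fine-tuned Whisper-small checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="jsfamily/korean-small_t332",
)

print(asr("korean_sample.wav"))
```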
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
YorkieOH10/AlchemistCoder-DS-6.7B-Q4_K_M-GGUF | YorkieOH10 | 2024-05-30T08:33:31Z | 1 | 1 | null | [
"gguf",
"code generation",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T08:33:20Z | ---
license: apache-2.0
tags:
- code generation
- llama-cpp
- gguf-my-repo
---
# YorkieOH10/AlchemistCoder-DS-6.7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`internlm/AlchemistCoder-DS-6.7B`](https://huggingface.co/internlm/AlchemistCoder-DS-6.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/internlm/AlchemistCoder-DS-6.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo YorkieOH10/AlchemistCoder-DS-6.7B-Q4_K_M-GGUF --model alchemistcoder-ds-6.7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo YorkieOH10/AlchemistCoder-DS-6.7B-Q4_K_M-GGUF --model alchemistcoder-ds-6.7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m alchemistcoder-ds-6.7b-q4_k_m.gguf -n 128
```
|
YorkieOH10/AlchemistCoder-DS-6.7B-Q8_0-GGUF | YorkieOH10 | 2024-05-30T08:30:50Z | 2 | 1 | null | [
"gguf",
"code generation",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-30T08:30:28Z | ---
license: apache-2.0
tags:
- code generation
- llama-cpp
- gguf-my-repo
---
# YorkieOH10/AlchemistCoder-DS-6.7B-Q8_0-GGUF
This model was converted to GGUF format from [`internlm/AlchemistCoder-DS-6.7B`](https://huggingface.co/internlm/AlchemistCoder-DS-6.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/internlm/AlchemistCoder-DS-6.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo YorkieOH10/AlchemistCoder-DS-6.7B-Q8_0-GGUF --model alchemistcoder-ds-6.7b-q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo YorkieOH10/AlchemistCoder-DS-6.7B-Q8_0-GGUF --model alchemistcoder-ds-6.7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m alchemistcoder-ds-6.7b-q8_0.gguf -n 128
```
|
RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf | RichardErkhov | 2024-05-30T08:26:37Z | 40 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T05:31:51Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CollectiveCognition-v1.1-Nebula-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/CollectiveCognition-v1.1-Nebula-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CollectiveCognition-v1.1-Nebula-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [CollectiveCognition-v1.1-Nebula-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [CollectiveCognition-v1.1-Nebula-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [CollectiveCognition-v1.1-Nebula-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [CollectiveCognition-v1.1-Nebula-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [CollectiveCognition-v1.1-Nebula-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [CollectiveCognition-v1.1-Nebula-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
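A minimal run sketch with llama.cpp (following the same invocation pattern as the other GGUF repos above; the quant choice and prompt are placeholders):

```bash
llama-cli --hf-repo RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf \
  --model CollectiveCognition-v1.1-Nebula-7B.Q4_K_M.gguf \
  -p "Write a short poem about nebulae."
```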
Original model description:
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# CollectiveCognition-v1.1-Nebula-7B
CollectiveCognition-v1.1-Nebula-7B is a merge of [teknium/CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) and [PulsarAI/Nebula-7B](https://huggingface.co/Weyaxi/PulsarAI/Nebula-7B).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__CollectiveCognition-v1.1-Nebula-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.79 |
| ARC (25-shot) | 58.11 |
| HellaSwag (10-shot) | 82.39 |
| MMLU (5-shot) | 57.03 |
| TruthfulQA (0-shot) | 53.53 |
| Winogrande (5-shot) | 73.72 |
| GSM8K (5-shot) | 9.55 |
| DROP (3-shot) | 42.17 |
|