modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
BTX24/vit_birads_classifier_23 | BTX24 | 2024-07-01T20:46:14Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T20:46:13Z | Entry not found |
PSM272/Nova-14B-slerp | PSM272 | 2024-07-01T20:48:37Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"SillyTilly/google-gemma-2-9b-it",
"THUDM/glm-4-9b-chat",
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T20:48:37Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- SillyTilly/google-gemma-2-9b-it
- THUDM/glm-4-9b-chat
---
# Nova-14B-slerp
Nova-14B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [SillyTilly/google-gemma-2-9b-it](https://huggingface.co/SillyTilly/google-gemma-2-9b-it)
* [THUDM/glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: SillyTilly/google-gemma-2-9b-it
        layer_range: [0, 32]
      - model: THUDM/glm-4-9b-chat
        layer_range: [0, 32]
merge_method: slerp
base_model: SillyTilly/google-gemma-2-9b-it
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
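# Assuming the standard mergekit CLI, this config can be run with:
#   mergekit-yaml config.yaml ./Nova-14B-slerp
# (the config filename and output path are illustrative)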
``` |
Edwinhr716/Trendyol-LLM-7b-chat-v0.1-fork | Edwinhr716 | 2024-07-01T20:49:06Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T20:49:06Z | Entry not found |
paupb/llama-3-8b-sft-sensationalist-detector | paupb | 2024-07-01T21:08:25Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T20:52:00Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Sensationalist Detector in Spanish
We created a dataset of Spanish-language news articles from the science and technology section, classified as Sensationalist or Non-Sensationalist. The dataset consists of real news articles and synthetically generated ones.
We used this dataset to perform instruction tuning of LLaMA3-8B.
This is the final project for the NLP class at Exactas UBA.
The training script and dataset used are available at <https://github.com/EchuCompa/Sensationalism-Detector-In-Spanish>.
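A minimal inference sketch follows (assumptions: the repo holds transformers-compatible weights, and the prompt wording below is illustrative rather than the exact training template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "paupb/llama-3-8b-sft-sensationalist-detector"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; see the training script linked above for the exact template.
prompt = "Clasifica la siguiente noticia como Sensacionalista o No Sensacionalista:\n<texto de la noticia>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```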
# Uploaded model
- **Developed by:** paupb, EchuCompa, tomipalazzo, simonlew
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TTTXXX01/simpo-exps | TTTXXX01 | 2024-07-01T20:52:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T20:52:50Z | Entry not found |
Osamakhateeb/gasmUnicorn | Osamakhateeb | 2024-07-01T20:54:25Z | 0 | 0 | null | [
"license:pddl",
"region:us"
] | null | 2024-07-01T20:53:17Z | ---
license: pddl
---
|
AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-zu-winogrande-med | AdamKasumovic | 2024-07-01T20:55:14Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T20:55:14Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
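For reference, a minimal sketch of loading this checkpoint with Unsloth for inference (assumes a CUDA GPU with unsloth installed; `max_seq_length` is illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-zu-winogrande-med",
    max_seq_length=2048,
    load_in_4bit=True,  # the base model is a bnb 4-bit checkpoint
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path
```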
|
chaley22/paligemma-lm-coco35 | chaley22 | 2024-07-02T16:51:34Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2024-07-01T20:55:20Z | Entry not found |
Osamakhateeb/realisticVersion | Osamakhateeb | 2024-07-01T20:56:54Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T20:55:23Z | ---
license: mit
---
|
valerielucro/mistral_gsm8k_sft_v2_epoch2 | valerielucro | 2024-07-01T20:57:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T20:56:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
itay-nakash/model_387dff9370_sweep_snowy-sea-1170 | itay-nakash | 2024-07-01T20:57:04Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T20:57:04Z | Entry not found |
oleshy/ontochem_biobert_cross_valid_4 | oleshy | 2024-07-01T21:06:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-01T20:57:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: ontochem_biobert_cross_valid_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ontochem_biobert_cross_valid_4
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
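For orientation, these settings map onto 🤗 `TrainingArguments` roughly as follows (a sketch; `output_dir` is hypothetical since the training script is not published):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ontochem_biobert_cross_valid_4",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
)
```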
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 0.0443 |
| No log | 2.0 | 10 | 0.0443 |
| No log | 3.0 | 15 | 0.0442 |
| No log | 4.0 | 20 | 0.0441 |
| No log | 5.0 | 25 | 0.0439 |
| No log | 6.0 | 30 | 0.0437 |
| No log | 7.0 | 35 | 0.0435 |
| No log | 8.0 | 40 | 0.0433 |
| No log | 9.0 | 45 | 0.0430 |
| No log | 10.0 | 50 | 0.0427 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
habulaj/288636256954 | habulaj | 2024-07-01T20:59:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T20:59:20Z | Entry not found |
Hamze-Hammami/q-FrozenLake-v1-4x4-noSlippery | Hamze-Hammami | 2024-07-01T20:59:32Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-01T20:59:31Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage

```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
model = load_from_hub(repo_id="Hamze-Hammami/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
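Continuing the snippet above, acting greedily from the loaded Q-table looks like this (a sketch; the `"qtable"` key follows the Deep RL course's pickle format, which is an assumption here):
```python
import numpy as np

state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # greedy action for the current state
```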
|
jeff3six9/gpt2 | jeff3six9 | 2024-07-01T21:01:03Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:01:03Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-zu-winogrande-low | AdamKasumovic | 2024-07-01T21:01:52Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:01:51Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Piotrasz/Llama-2-7b-hf-ROME-50-en | Piotrasz | 2024-07-01T21:05:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T21:02:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dominic1021/newestmodelspony | dominic1021 | 2024-07-01T21:06:11Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:03:07Z | Entry not found |
Vidulaae/vidula-finetune-llama | Vidulaae | 2024-07-01T21:37:12Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T21:04:50Z | Entry not found |
daedalus16/bart-M3c-strat_eff | daedalus16 | 2024-07-01T21:05:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-01T21:04:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
megagarra/lorena | megagarra | 2024-07-02T00:16:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:04:53Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-xh-winogrande-high | AdamKasumovic | 2024-07-01T21:06:34Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:06:33Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JMK001/FT-DocVQA | JMK001 | 2024-07-01T21:08:10Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:08:10Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-xh-winogrande-med | AdamKasumovic | 2024-07-01T22:41:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T21:09:27Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YuvrajSingh9886/phi3-mini-fine-tuned-agricultural-common-plant-diseases-QnA | YuvrajSingh9886 | 2024-07-01T21:12:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:10:40Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** YuvrajSingh9886
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
misturah/yoruba_picture | misturah | 2024-07-01T21:12:22Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:12:22Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-xh-winogrande-low | AdamKasumovic | 2024-07-01T21:37:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T21:14:38Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdamKasumovic/llama3-70b-instruct-stbt-winogrande-train-s-zu-winogrande-high | AdamKasumovic | 2024-07-01T21:32:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T21:14:38Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
buseskorkmaz/hackernews-bc-gpt2 | buseskorkmaz | 2024-07-01T21:16:51Z | 0 | 0 | transformers | [
"transformers",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:14:42Z | ---
license: cc-by-nc-nd-4.0
---
|
itay-nakash/model_387dff9370_sweep_spring-surf-1171 | itay-nakash | 2024-07-01T21:14:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:14:46Z | Entry not found |
viihzin/angeliqueboyer | viihzin | 2024-07-01T21:19:08Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T21:18:36Z | ---
license: openrail
---
|
KataSuriplanta/arbol | KataSuriplanta | 2024-07-01T21:18:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:18:54Z | Entry not found |
abhayesian/LLama3_HarmBench_NoAttack_2 | abhayesian | 2024-07-01T21:19:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:19:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
encoderdecoder083/paligemma-224-customdata-1 | encoderdecoder083 | 2024-07-01T23:05:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:19:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
viihzin/poncho | viihzin | 2024-07-01T21:20:16Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T21:19:56Z | ---
license: openrail
---
|
ZeroWw/gemma-2-9b-SBS | ZeroWw | 2024-07-01T21:36:07Z | 0 | 0 | null | [
"en",
"license:mit",
"region:us"
] | null | 2024-07-01T21:22:02Z | ---
license: mit
language:
- en
---
8-bit quantized Gemma 2 9B, built with gemma.cpp from Gemma 2 as released.
|
eurecom-ds/edm-ema-shapes3d-64 | eurecom-ds | 2024-07-02T07:03:34Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-07-01T21:22:23Z | Entry not found |
habulaj/48305718 | habulaj | 2024-07-01T21:23:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:23:16Z | Entry not found |
HassanSM/tuned_dataset_model | HassanSM | 2024-07-01T21:23:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:23:51Z | Entry not found |
BTX24/vit-base-patch16-224-in21k-finetuned-birads-23 | BTX24 | 2024-07-01T21:24:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:24:39Z | Entry not found |
gruhit-patel/poca-SoccerTwos | gruhit-patel | 2024-07-01T21:24:48Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-07-01T21:24:45Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gruhit-patel/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Avshkol/code-llama-7b-text-to-sql | Avshkol | 2024-07-01T21:25:10Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:25:10Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-stbt-winogrande-train-s-zu-winogrande-med | AdamKasumovic | 2024-07-01T21:26:10Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:26:09Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdamKasumovic/llama3-70b-instruct-stbt-winogrande-train-s-zu-winogrande-low | AdamKasumovic | 2024-07-01T22:15:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T21:28:26Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DavidHuggingFace/Heimer-dpo-TinyLlama-1.1B-GGUF | DavidHuggingFace | 2024-07-01T21:28:55Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:28:55Z | Entry not found |
JuliusFx/dyu-fr-t5-small_v2 | JuliusFx | 2024-07-02T19:50:35Z | 0 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-01T21:30:28Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: dyu-fr-t5-small_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dyu-fr-t5-small_v2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1106
- Validation Loss: 3.0976
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
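The optimizer dictionary above corresponds roughly to the following Keras-side optimizer from 🤗 Transformers (a sketch of the reported settings; requires TensorFlow):
```python
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```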
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.6296 | 3.4205 | 0 |
| 3.5353 | 3.3749 | 1 |
| 3.4514 | 3.3120 | 2 |
| 3.3849 | 3.2651 | 3 |
| 3.3251 | 3.2209 | 4 |
| 3.2797 | 3.1856 | 5 |
| 3.2260 | 3.1575 | 6 |
| 3.1839 | 3.1354 | 7 |
| 3.1493 | 3.1079 | 8 |
| 3.1106 | 3.0976 | 9 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
FreeINT/uptime | FreeINT | 2024-07-01T21:30:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:30:38Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-stbt-winogrande-train-s-xh-winogrande-high | AdamKasumovic | 2024-07-01T21:32:15Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:32:09Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mqliu/mantis-8b-idefics2_2048 | mqliu | 2024-07-01T21:32:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:32:30Z | Entry not found |
Loren85/TTD.gg-Nuovo-Modello | Loren85 | 2024-07-01T21:33:56Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T21:32:34Z | ---
license: openrail
---
|
KicksOnFire/hello-world-model | KicksOnFire | 2024-07-02T00:21:42Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T21:36:11Z | ---
license: mit
---
|
Hamze-Hammami/Taxi-v3 | Hamze-Hammami | 2024-07-01T21:39:33Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-01T21:36:36Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage

```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
model = load_from_hub(repo_id="Hamze-Hammami/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
valerielucro/mistral_gsm8k_sft_v1_epoch3 | valerielucro | 2024-07-01T21:38:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:37:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shenbinqian/whisper-small-hi | shenbinqian | 2024-07-01T21:40:13Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:40:13Z | Entry not found |
sruly/phi-search-2 | sruly | 2024-07-01T21:43:01Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:sruly/StepBackSearch-ds-phi-edition",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T21:40:51Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: microsoft/Phi-3-mini-4k-instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- sruly/StepBackSearch-ds-phi-edition
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
AdamKasumovic/llama3-70b-instruct-stbt-winogrande-train-s-xh-winogrande-low | AdamKasumovic | 2024-07-01T21:41:24Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:41:24Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdamKasumovic/llama3-70b-instruct-stbt-winogrande-train-s-xh-winogrande-med | AdamKasumovic | 2024-07-01T21:42:41Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:42:40Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-70b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DimensionSTP/EEVE-Korean-Instruct-10.8B-v1.0-scientificQA | DimensionSTP | 2024-07-01T21:58:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:yanolja/EEVE-Korean-10.8B-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T21:45:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-10.8B-v1.0
model-index:
- name: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
results: []
---
## Model Details
**This model is fine-tuned from yanolja/EEVE-Korean-Instruct-10.8B-v1.0.**
**Fine-tuning dataset: Scientific QA dataset**
|
kovivenkatakeerthi/t5-small-finetuned-xsum | kovivenkatakeerthi | 2024-07-01T21:45:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:45:43Z | Entry not found |
Maxclon/ComfyUI_Models | Maxclon | 2024-07-01T22:07:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T21:46:38Z | ---
license: apache-2.0
---
|
DimensionSTP/Llama-3-Alpha-Ko-8B-Instruct-scientificQA | DimensionSTP | 2024-07-02T10:14:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T21:49:40Z | ---
license: other
license_name: llama3
language:
- ko
---
## Model Details
**This model is fine-tuned from allganize/Llama-3-Alpha-Ko-8B-Instruct.**
**Fine-tuning dataset: Scientific QA dataset**
|
habulaj/5968945084 | habulaj | 2024-07-01T21:51:28Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:51:24Z | Entry not found |
DavidHuggingFace/Heimer-dpo-TinyLlama-1.1B-AWQ | DavidHuggingFace | 2024-07-01T21:53:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-07-01T21:52:57Z | Entry not found |
alexzarate/steve_jobs_young | alexzarate | 2024-07-02T01:44:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:54:11Z | Entry not found |
habulaj/8638063295 | habulaj | 2024-07-01T21:54:28Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:54:24Z | Entry not found |
mqliu/mantis-8b-idefics2_1024 | mqliu | 2024-07-01T22:26:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"pretraining",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T21:55:11Z | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2_1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>]()
# mantis-8b-idefics2_1024
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
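As a minimal sketch (not the card author's actual launch code), the listed values map onto Hugging Face `TrainingArguments` roughly as follows; the output directory is a placeholder, the multi-GPU launch (4 devices, e.g. via `accelerate` or `torchrun`) is assumed to happen externally, and the precision setting is not stated in the card.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mantis-8b-idefics2_1024",  # placeholder
    learning_rate=5e-06,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=32,  # 1 per device x 4 GPUs x 32 = 128 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```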
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
habulaj/276090246237 | habulaj | 2024-07-01T21:57:16Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:57:14Z | Entry not found |
anirvankrishna/ger_eng_translation | anirvankrishna | 2024-07-01T21:59:58Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T21:57:45Z | Entry not found |
YuvrajSingh9886/phi3-mini-fine-tuned-agricultural-soil-QnA | YuvrajSingh9886 | 2024-07-01T22:02:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T22:00:47Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** YuvrajSingh9886
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AIRRC/Adam2 | AIRRC | 2024-07-01T22:57:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:11:42Z | ---
license: apache-2.0
library_name: adapter-transformers
tags:
- not-for-all-audiences
---

Hello! For anyone reading: this is one of our many models. Feel free to check it out and see what's new; we update features daily. |
gogosig/adobe5k-lora | gogosig | 2024-07-01T22:13:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:13:51Z | Entry not found |
habulaj/197727435659 | habulaj | 2024-07-01T22:15:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:15:37Z | Entry not found |
ly826la/my_awesome_food_model | ly826la | 2024-07-01T22:16:28Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:16:28Z | Entry not found |
ThenarEminence/ponyCAD | ThenarEminence | 2024-07-01T22:17:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:17:09Z | Entry not found |
Ramikan-BR/Codama-8b-LORA-v0 | Ramikan-BR | 2024-07-01T22:23:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T22:17:47Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
infiniteglass1/zephyr-7b-sft-qlora | infiniteglass1 | 2024-07-01T22:18:40Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:18:40Z | Entry not found |
benzids/tinkoff | benzids | 2024-07-01T22:38:16Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-01T22:18:52Z | ---
license: openrail
---
|
cris177/Qwen2-Simple-Arguments | cris177 | 2024-07-02T19:59:50Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T22:22:10Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
model-index:
- name: Qwen2-Simple-Arguments
results:
- task:
type: text-generation
dataset:
name: Argument-parsing
type: Argument-parsing
metrics:
- name: Accuracy
type: Accuracy
value: 100
---
# Qwen2 Simple Arguments

This model aims to parse simple English arguments: arguments formed of two premises and a conclusion, built from two propositions.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Cristian Desivo
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Qwen2-0.5b
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** TBD
- **Demo:** TBD
### Quantization
- **Q4_K_M.gguf** https://huggingface.co/cris177/Qwen2-Simple-Arguments/resolve/main/Qwen2_arguments.Q4_K_M.gguf?download=true
## Usage
Below we share some code snippets to help you quickly get started with running the model.
### llama.cpp server [Recommended]
The recommended way of running the model is with a llama.cpp server running the quantized https://huggingface.co/cris177/Qwen2-Simple-Arguments/resolve/main/Qwen2_arguments.Q4_K_M.gguf?download=true
Then you can use the following script to use the server's model for inference:
```python
import json
import requests
def llmCompletion(prompt, **args):
url = "http://localhost:8080/completions"
headers = {
"Content-Type": "application/json"
}
data = {
'prompt': prompt
}
for arg in args:
data[arg] = args[arg]
response = requests.post(url, headers=headers, json=data)
return response.json()
def analyze_argument(argument):
instruction = 'Based on the following argument, identify the following elements: premises, conclusion, propositions, type of argument, negation of propositions and validity.'
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:"""
prompt = alpaca_prompt.format(instruction, argument)
with open("prompt.txt", "w") as f:
f.write(prompt)
properties = {
"Premise 1": {"type": "string"},
"Premise 2": {"type": "string"},
"Conclusion": {"type": "string"},
"Proposition 1": {"type": "string"},
"Proposition 2": {"type": "string"},
"Type of argument": {"type": "string"},
"Negation of Proposition 1": {"type": "string"},
"Negation of Proposition 2": {"type": "string"},
"Validity": {"type": "boolean"},
}
analysis = llmCompletion(prompt,
max_tokens=1000,
temperature=0,
json_schema={
"type": "object",
"properties": properties,
"required": list(properties.keys()),
},
)
return analysis['content']
argument = "If it's wednesday it's cold, and it's cold, therefore it's wednesday."
output = analyze_argument("If it's wednesday it's cold, and it's cold, therefore it's wednesday.")
print(output)
```
Output:
```
{"Premise 1": "If it's wednesday it's cold",
"Premise 2": "It's cold",
"Conclusion": "It is Wednesday",
"Proposition 1": "It is Wednesday",
"Proposition 2": "It is cold",
"Type of argument": "affirming the consequent",
"Negation of Proposition 1": "It is not Wednesday",
"Negation of Proposition 2": "It is not cold",
"Validity": true}
{'Premise 1': "If it's wednesday it's cold", 'Premise 2': "It's cold", 'Conclusion': 'It is Wednesday', 'Proposition 1': 'It is Wednesday', 'Proposition 2': 'It is cold', 'Type of argument': 'affirming the consequent', 'Negation of Proposition 1': 'It is not Wednesday', 'Negation of Proposition 2': 'It is not cold', 'Validity': True}
```
### transformers 🤗
First make sure to pip install -U transformers, then use the code below replacing the `argument` variable for the argument you want to parse:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cris177/Qwen2-Simple-Arguments",
device_map="auto",)
tokenizer = AutoTokenizer.from_pretrained("cris177/Qwen2-Simple-Arguments")
argument = "If it's wednesday it's cold, and it's cold, therefore it's wednesday."
instruction = 'Based on the following argument, identify the following elements: premises, conclusion, propositions, type of argument, negation of propositions and validity.'
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:"""
prompt = alpaca_prompt.format(instruction, argument)
input_ids = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_length=1000, num_return_sequences=1)
print(tokenizer.decode(outputs[0]))
```
Output:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Based on the following argument, identify the following elements: premises, conclusion, propositions, type of argument, negation of propositions and validity.
### Input:
If it's wednesday it's cold, and it's cold, therefore it's wednesday.
### Response:
{"Premise 1": "If it's wednesday it's cold",
"Premise 2": "It's cold",
"Conclusion": "It is Wednesday",
"Proposition 1": "It is Wednesday",
"Proposition 2": "It is cold",
"Type of argument": "affirming the consequent",
"Negation of Proposition 1": "It is not Wednesday",
"Negation of Proposition 2": "It is not cold",
"Validity": "false"}<|endoftext|>
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained on synthetic data, based on the following types of arguments:
- Modus Ponens
- Modus Tollens
- Affirming the Consequent
- Disjunctive Syllogism
- Denying the Antecedent
- Invalid Conditional Syllogism
Each argument was constructed by selecting two random propositions (from a pre-generated list of 400 propositions), choosing a type of argument, and combining everything with randomly selected connectors (therefore, since, hence, thus, etc.).
50k arguments were created for training and 100 for testing; a minimal sketch of the generation procedure is shown below.
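This is only an illustrative sketch of how such a generator could look; the proposition list, connector list, and sentence templates below are assumptions, not the actual ones used to build the dataset.

```python
import random

# Hypothetical propositions and connectors; the real dataset drew two
# propositions at random from a pre-generated list of 400.
PROPOSITIONS = ["it is Wednesday", "it is cold", "the street is wet", "it is raining"]
CONNECTORS = ["therefore", "hence", "thus", "so"]

# Simplified templates per argument type; each returns
# (premise 1, premise 2, conclusion, validity) for propositions p and q.
TEMPLATES = {
    "modus ponens": lambda p, q: (f"if {p} then {q}", p, q, True),
    "modus tollens": lambda p, q: (f"if {p} then {q}", f"it is not the case that {q}",
                                   f"it is not the case that {p}", True),
    "affirming the consequent": lambda p, q: (f"if {p} then {q}", q, p, False),
    "denying the antecedent": lambda p, q: (f"if {p} then {q}",
                                            f"it is not the case that {p}",
                                            f"it is not the case that {q}", False),
    "disjunctive syllogism": lambda p, q: (f"either {p} or {q}",
                                           f"it is not the case that {p}", q, True),
}

def make_argument(rng=random):
    p, q = rng.sample(PROPOSITIONS, 2)
    arg_type = rng.choice(list(TEMPLATES))
    premise1, premise2, conclusion, valid = TEMPLATES[arg_type](p, q)
    text = f"{premise1[0].upper()}{premise1[1:]}, and {premise2}, {rng.choice(CONNECTORS)} {conclusion}."
    label = {
        "Premise 1": premise1, "Premise 2": premise2, "Conclusion": conclusion,
        "Proposition 1": p, "Proposition 2": q,
        "Type of argument": arg_type, "Validity": valid,
    }
    return text, label

text, label = make_argument()
print(text)
```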
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing
We converted the data to the Alpaca chat format before feeding it to the model.
#### Training
We used Unsloth for reduced memory usage and faster training.
We trained for one epoch.
Less than 2.5 GB of VRAM were used for training, and it took 2.5 hours.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The model obtains 100% train and test accuracy on our synthetic dataset. |
habulaj/1259712781 | habulaj | 2024-07-01T22:23:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:23:30Z | Entry not found |
habulaj/134579123111 | habulaj | 2024-07-01T22:24:07Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:24:03Z | Entry not found |
habulaj/419631386395 | habulaj | 2024-07-01T22:25:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:25:22Z | Entry not found |
blockblockblock/NuExtract-bpw5.5-exl2 | blockblockblock | 2024-07-01T22:30:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-07-01T22:28:04Z | ---
license: mit
language:
- en
---
# Structure Extraction Model by NuMind 🔥
NuExtract is a version of [phi-3-mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), fine-tuned on a private high-quality synthetic dataset for information extraction.
To use the model, provide an input text (less than 2000 tokens) and a JSON template describing the information you need to extract.
Note: This model is purely extractive, so all text output by the model is present as is in the original text. You can also provide an example of output formatting to help the model understand your task more precisely.
Try it here: https://huggingface.co/spaces/numind/NuExtract
We also provide tiny (0.5B) and large (7B) versions of this model: [NuExtract-tiny](https://huggingface.co/numind/NuExtract-tiny) and [NuExtract-large](https://huggingface.co/numind/NuExtract-large)
**Checkout other models by NuMind:**
* SOTA Zero-shot NER Model [NuNER Zero](https://huggingface.co/numind/NuNER_Zero)
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)
## Benchmark
Zero-shot benchmark (to be released soon):
<p align="left">
<img src="result.png" width="600">
</p>
Fine-tuning benchmark (see blog post):
<p align="left">
<img src="result_ft.png" width="600">
</p>
## Usage
To use the model:
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def predict_NuExtract(model, tokenizer, text, schema, example=["", "", ""]):
schema = json.dumps(json.loads(schema), indent=4)
input_llm = "<|input|>\n### Template:\n" + schema + "\n"
for i in example:
if i != "":
input_llm += "### Example:\n"+ json.dumps(json.loads(i), indent=4)+"\n"
input_llm += "### Text:\n"+text +"\n<|output|>\n"
input_ids = tokenizer(input_llm, return_tensors="pt",truncation = True, max_length=4000).to("cuda")
output = tokenizer.decode(model.generate(**input_ids)[0], skip_special_tokens=True)
return output.split("<|output|>")[1].split("<|end-output|>")[0]
# We recommend using bf16 as it results in negligible performance loss
model = AutoModelForCausalLM.from_pretrained("numind/NuExtract", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract", trust_remote_code=True)
model.to("cuda")
model.eval()
text = """We introduce Mistral 7B, a 7–billion-parameter language model engineered for
superior performance and efficiency. Mistral 7B outperforms the best open 13B
model (Llama 2) across all evaluated benchmarks, and the best released 34B
model (Llama 1) in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding
window attention (SWA) to effectively handle sequences of arbitrary length with a
reduced inference cost. We also provide a model fine-tuned to follow instructions,
Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and
automated benchmarks. Our models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/announcing-mistral-7b/"""
schema = """{
"Model": {
"Name": "",
"Number of parameters": "",
"Number of max token": "",
"Architecture": []
},
"Usage": {
"Use case": [],
"Licence": ""
}
}"""
prediction = predict_NuExtract(model, tokenizer, text, schema, example=["","",""])
print(prediction)
``` |
Voidornovoid/translation_final | Voidornovoid | 2024-07-01T22:30:02Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-01T22:28:12Z | ---
license: mit
---
|
LLM-LAT/removed_backdoor | LLM-LAT | 2024-07-01T23:27:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T22:32:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kyynaama/Ahma-3B-exl2-6bpw | kyynaama | 2024-07-01T23:40:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"finnish",
"conversational",
"fi",
"dataset:Finnish-NLP/CulturaX_fi_cleaned",
"dataset:Finnish-NLP/HPLT_1.2_fi_cleaned",
"dataset:Finnish-NLP/wikipedia_20231101_fi_cleaned",
"dataset:Finnish-NLP/Reddit_fi_2006_2022",
"dataset:intfloat/multilingual_cc_news",
"arxiv:2302.13971",
"arxiv:2302.06675",
"arxiv:2305.16264",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-01T22:34:15Z | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- llama
datasets:
- Finnish-NLP/CulturaX_fi_cleaned
- Finnish-NLP/HPLT_1.2_fi_cleaned
- Finnish-NLP/wikipedia_20231101_fi_cleaned
- Finnish-NLP/Reddit_fi_2006_2022
- intfloat/multilingual_cc_news
inference: false
pipeline_tag: text-generation
---
This is the 6bpw exllamav2 quant of Finnish-NLP/Ahma-3B.
Original model card:
# Ahma-3B for Finnish
Ahma is a 3B parameter decoder-only transformer model based on Meta's Llama (v1) architecture, pretrained on the Finnish language. The original Llama model architecture was introduced in
[this paper](https://arxiv.org/abs/2302.13971)
and first released at [this page](https://github.com/facebookresearch/llama).
What does Ahma mean? Ahma is the Finnish word for wolverine! In the Finnish Lapland, wolverines are the biggest cause of reindeer damage.
There are two different sized Ahma models, all pretrained from scratch for 139B tokens:
| Model | Context length | Layers | Dim | Heads | Params |
|:--------------------------------------------------------------------------------|:---------------|:-------|:-----|:------|:-------|
| [Ahma-3B](https://huggingface.co/Finnish-NLP/Ahma-3B) | 2048 | 26 | 3200 | 32 | 3.6B |
| [Ahma-7B](https://huggingface.co/Finnish-NLP/Ahma-7B) | 2048 | 32 | 4096 | 32 | 7.0B |
## Intended uses & limitations
This model was pretrained only in a self-supervised way, without any supervised training. You can use this model for text generation or fine-tune it for a downstream task. This model followed a 2-stage pretraining approach where single-turn instruction-following examples were mixed in with the other training data in the second stage (explained more later in this readme). Thanks to this approach, this pretrained model is already capable of instruction following, but you might get even better results if you specifically fine-tune it for instruction following or other use cases. For instruction-following fine-tuning, you should use the same prompt format showcased below.
### How to use
**Finetuning:** \
We have now added finetuning example notebook along with video! \
Notebook: https://huggingface.co/Finnish-NLP/Ahma-3B/blob/main/Finetune_Ahma_3B_example.ipynb \
Video: https://www.youtube.com/watch?v=6mbgn9XzpS4
**Inference:** \
If you want to use this model for instruction-following, you need to use the same prompt format we used in the second stage of the pretraining (basically the same format what Meta used in their Llama2 models). **Note: do not use "LlamaTokenizer" from transformers library but always use the AutoTokenizer instead, or use the plain sentencepiece tokenizer.** Here is an example using the instruction-following prompt format, with some generation arguments you can modify for your use:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
system_prompt = "Olet tekoälyavustaja. Vastaat aina mahdollisimman avuliaasti. Vastauksesi eivät saa sisältää mitään haitallista, epäeettistä, rasistista, seksististä, vaarallista tai laitonta sisältöä. Jos kysymyksessä ei ole mitään järkeä tai se ei ole asiasisällöltään johdonmukainen, selitä miksi sen sijaan, että vastaisit jotain väärin. Jos et tiedä vastausta kysymykseen, älä kerro väärää tietoa."
def format_prompt(prompt: str) -> str:
prompt = f" [INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n{prompt.strip()} [/INST] "
return prompt
tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/Ahma-3B")
model = AutoModelForCausalLM.from_pretrained("Finnish-NLP/Ahma-3B")
# use the custom prompt format function or the chat template feature in the tokenizer to format your inputs
# prompt = format_prompt("Mitä hyötyjä pienet avoimen lähdekoodin kielimallit tuovat?")
# inputs = tokenizer(prompt, return_tensors="pt")
messages = [
{
"role": "system",
"content": system_prompt,
},
{"role": "user", "content": "Mitä hyötyjä pienet avoimen lähdekoodin kielimallit tuovat?"},
]
inputs = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
generated_ids = model.generate(
inputs,
temperature=0.6,
penalty_alpha=0.6,
top_k=4,
do_sample=True,
repetition_penalty=1.2,
min_length=5,
max_length=2048,
)
generated_text = tokenizer.batch_decode(
generated_ids, skip_special_tokens=False
)[0]
# Pienillä avoimen lähdekoodin kielimalleilla on lukuisia etuja, kuten parempi tarkkuus, nopeampi käsittelyaika ja parempi skaalautuvuus. Ne ovat myös usein edullisempia käyttää kuin kaupalliset mallit, joten ne ovat hyvä valinta pienemmille organisaatioille ja yksityishenkilöille, joilla on rajoitettu budjetti. Lisäksi ne voivat tarjota paremman joustavuuden ja mukauttamisen, koska käyttäjät voivat räätälöidä malleja vastaamaan omia tarpeitaan. Kaiken kaikkiaan pienet avoimen lähdekoodin kielimallit tarjoavat merkittäviä etuja, kuten paremman suorituskyvyn, paremman tarkkuuden, nopeamman käsittelyajan ja paremman skaalautuvuuden.
```
You may experiment with different system prompt instructions too if you like.
### Limitations and bias
The training data used for this model contains a lot of content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
To reduce toxic content, the training data was filtered with a toxicity classifier, but this cannot truly eliminate all toxic text.
## Training data
This model was pretrained on the combination of 14 datasets:
- [CulturaX_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/CulturaX_fi_cleaned), we cleaned Finnish split from the original [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset
- [HPLT_1.2_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/HPLT_1.2_fi_cleaned), we cleaned Finnish split from the original [HPLT v1.2](https://hplt-project.org/datasets/v1.2) dataset
- [wikipedia_20231101_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/wikipedia_20231101_fi_cleaned), we used the Finnish subset of the wikipedia (November 2023) dataset
- [Reddit_fi_2006_2022](https://huggingface.co/datasets/Finnish-NLP/Reddit_fi_2006_2022), filtered and post-processed dataset of Finnish Reddit
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
- [Project Lönnrot](http://www.lonnrot.net/)
- [Finnish parliament speeches](https://avoindata.eduskunta.fi)
- [multilingual_cc_news](https://huggingface.co/datasets/intfloat/multilingual_cc_news), we used the Finnish subset of the multilingual CC-News dataset
- [fi-news-corpus](https://github.com/nkrusch/fi-news-corpus)
- Finnish higher education public theses
- Finnish single-turn instruction-following datasets, combination of multiple originally openly licensed English datasets translated to Finnish. For example, [Ultrachat, Aya, Capybara, etc](https://huggingface.co/collections/Finnish-NLP/sft-dpo-dataset-65f55dde1139c3cd683ff035)
Raw datasets were automatically cleaned to filter out bad-quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model trained only on very clean Finnish texts. This perplexity score can then be used to determine how "clean" the Finnish in a text is. To reduce toxic text, we used the Finnish toxicity classifier [TurkuNLP/bert-large-finnish-cased-toxicity](https://huggingface.co/TurkuNLP/bert-large-finnish-cased-toxicity) released by TurkuNLP to classify all text examples. The classified toxicity label scores can then be used to determine how toxic a text is.
All datasets were concatenated and the whole dataset was near-deduplicated using MinHashLSH from [text-dedup](https://github.com/ChenghaoMou/text-dedup). The 95th-percentile perplexity score was used as a filtering threshold to drop the worst-quality 5% of texts. To reduce the amount of toxic content, the dataset was filtered to include only text examples with scores below 80% for the toxicity labels "label_identity_attack", "label_insult", "label_threat" and "label_severe_toxicity". A rough sketch of this per-example filtering is shown below.
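This is only a rough sketch of the per-example filtering, not the project's actual pipeline: the KenLM model path, the truncation to 512 characters, and the concrete perplexity cutoff are assumptions, while the toxicity classifier name and the 80% label threshold follow the description above.

```python
import kenlm
from transformers import pipeline

# KenLM model trained on very clean Finnish text (path is a placeholder).
lm = kenlm.Model("clean_finnish.arpa")
toxicity = pipeline(
    "text-classification",
    model="TurkuNLP/bert-large-finnish-cased-toxicity",
    top_k=None,  # return scores for all toxicity labels
)

PPL_THRESHOLD = 1000.0  # stand-in for the 95th-percentile perplexity cutoff
TOXIC_LABELS = {"label_identity_attack", "label_insult",
                "label_threat", "label_severe_toxicity"}

def keep_example(text: str) -> bool:
    # Drop texts whose perplexity under the clean-Finnish LM is too high.
    if lm.perplexity(text) > PPL_THRESHOLD:
        return False
    # Drop texts scoring 80% or more on any of the listed toxicity labels.
    scores = {d["label"]: d["score"] for d in toxicity([text[:512]])[0]}
    return all(scores.get(label, 0.0) < 0.8 for label in TOXIC_LABELS)
```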
Finally, 20,000 text examples from each of the CulturaX, Wikipedia, Yle, STT, Suomi24, and Reddit datasets were randomly selected for the evaluation dataset.
The final training dataset had 23 billion words (calculated with regex "\w+") and the evaluation dataset had 23 million words. After tokenization, the training dataset had 41 billion tokens and the evaluation dataset had 40 million tokens. For the 2-stage pretraining, training datasets are divided as follows:
The first stage:
|Dataset | Words | Ratio |
|:-----------------------------|:------------|:-------------|
|CulturaX | 12.820B | 59.88\% |
|HPLT v1.2 | 5.034B | 23.51\% |
|Suomi24 | 3.018B | 14.09\% |
|Reddit | 0.141B | 0.66\% |
|CC-News | 0.311B | 1.45\% |
|FI news corpus | 0.004B | 0.02\% |
|Project Lönnrot | 0.083B | 0.39\% |
|**TOTAL** | **21.410B** | **100.0\%** |
The second stage:
|Dataset | Words | Ratio |
|:--------------------------------------------------------------|:------------|:------------|
|CulturaX (cleaner sample using KenLM perplexity score) | 2.252B | 55.48\% |
|Wikipedia | 0.095B | 2.34\% |
|STT | 0.253B | 6.23\% |
|Yle | 0.212B | 5.22\% |
|Finnish parliament speeches | 0.021B | 0.52\% |
|Finnish higher education public theses | 0.855B | 21.07\% |
|Finnish instruction-following datasets (note: 2X upsampled) | 0.371B | 9.14\% |
|**TOTAL** | **4.059B** | **100.0\%** |
## Training procedure
### Preprocessing
Texts are tokenized using Byte-Pair Encoding (BPE) with the SentencePiece implementation, splitting all numbers into individual digits and using bytes to decompose unknown UTF-8 characters. The total vocabulary size is 64k tokens. Inputs are sequences of 2048 consecutive tokens. Texts are not lower-cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. Both BOS and EOS tokens were used in the pretraining.
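As an illustrative sketch (not the actual training command), a SentencePiece training invocation with the properties described above could look like the following; the input file name and model prefix are placeholders.

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="finnish_corpus.txt",     # placeholder corpus file
    model_prefix="ahma_tokenizer",  # placeholder output prefix
    model_type="bpe",               # Byte-Pair Encoding
    vocab_size=64000,               # 64k-token vocabulary
    split_digits=True,              # split all numbers into individual digits
    byte_fallback=True,             # decompose unknown UTF-8 characters into bytes
)
```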
### 2-stage pretraining
The model was trained on a TPUv4-32 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Training was conducted with a slightly modified Jax/Flax-based [EasyLM](https://github.com/young-geng/EasyLM) framework, inspired by the [OpenLLaMA](https://github.com/openlm-research/open_llama) project. The optimizer used was [Lion](https://arxiv.org/abs/2302.06675).
The 2-stage pretraining approach was inspired by [MiniCPM](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20) findings. For the first stage (85% of the entire training), we used noisier web-scraped datasets. For the second stage (15% of the entire training), we primarily used cleaner datasets and instruction-following datasets shuffled together, like in MiniCPM. The learning rate schedule for the 2-stage pretraining was Warmup-Stable-Decay (WSD). During the first stage, the learning rate schedule had a linear warmup for about 8 billion tokens to a peak learning rate of 1e-4 (note: with the Lion optimizer, the learning rate had to be about 10 times smaller than with the commonly used AdamW), followed by a stable phase where the rate of 1e-4 was kept constant. During the second stage, the learning rate schedule had a linear decay from 1e-4 to 1e-5 for the first 13 billion tokens, followed by a stable phase for the remaining tokens.
In the first stage, the model was trained for 118 billion tokens, which is about three epochs of the first-stage training data, inspired by the findings of [this paper](https://arxiv.org/abs/2305.16264). In the second stage, the model was trained for 21 billion tokens, which is about three epochs of the second-stage training data.
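For illustration, the Warmup-Stable-Decay schedule described above can be written as a simple function of tokens seen (in billions). The boundary values below are read off this section's prose and should be treated as approximations, not exact training constants.

```python
def wsd_learning_rate(tokens_seen_b: float) -> float:
    """Warmup-Stable-Decay schedule; tokens_seen_b is in billions of tokens."""
    WARMUP_END = 8.0               # linear warmup over ~8B tokens
    STAGE1_END = 118.0             # stage 1 ends after 118B tokens
    DECAY_END = STAGE1_END + 13.0  # linear decay over first 13B tokens of stage 2
    PEAK, FLOOR = 1e-4, 1e-5

    if tokens_seen_b < WARMUP_END:   # warmup phase
        return PEAK * tokens_seen_b / WARMUP_END
    if tokens_seen_b < STAGE1_END:   # stable phase (stage 1)
        return PEAK
    if tokens_seen_b < DECAY_END:    # decay phase (start of stage 2)
        frac = (tokens_seen_b - STAGE1_END) / (DECAY_END - STAGE1_END)
        return PEAK + frac * (FLOOR - PEAK)
    return FLOOR                     # stable tail of stage 2
```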
Thanks to the WSD learning rate schedule, you can more easily experiment with different first-stage model checkpoints. For example, you could apply the second-stage training on an earlier checkpoint or continue pretraining further before the second stage. Model checkpoints were pushed to this repository every 100,000 training steps (approximately 13 billion tokens).
- [900K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/916632fe707a7fbe341a1902ac9eacf6e5872ec9)
- [800K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/a18d46e62823b19b4a97332c0a5a62b14372a3e2)
- [700K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/2d16e05820af108582dbfcd3d25e51c6f1d5076b)
- [600K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/949f4bfba406882d5ce0343aa1242bcf901202e2)
- [500K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/359812c02839d4085d890c6db0e57796b7e48bfc)
- [400K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/62468680cb84579a7d1885f60abe6d6607f59f45)
- [300K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/0424dcc0b3dbf505f7b20cf02cb80233289ef125)
- [200K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/e415206d791aad108bed8578009bf255c1f22c91)
- [100K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/8085f7c3fba46cfdbf95a01b7a1da1587b757f8b)
## Evaluation results
### FIN-bench
This Ahma model was primarily evaluated using [FIN-bench by TurkuNLP](https://github.com/TurkuNLP/FIN-bench), and the same evaluation was carried out for other relevant Finnish models for comparison. Below are the results with 0-shot and 3-shot settings in FIN-bench:
| Benchmark | Ahma 3B (instruct prompt format) 0-shot | Ahma 7B (instruct prompt format) 0-shot | FinGPT 8B 0-shot | Viking 7B 0-shot | Poro 34B (8bit quant) 0-shot |
|:---------------------------|:----------------------------------------|:----------------------------------------|:-----------------|:-----------------|:-----------------------------|
| Analogies | 50.77 | TBA | 49.23 | 40.00 | 54.62 |
| Arithmetic | 27.64 | TBA | 33.15 | 30.16 | 30.34 |
| Cause and Effect | 59.48 | TBA | 66.01 | 58.82 | 62.74 |
| Emotions | 36.25 | TBA | 22.50 | 26.25 | 35.63 |
| Empirical Judgements | 33.33 | TBA | 27.27 | 33.33 | 49.49 |
| General Knowledge | 44.29 | TBA | 40.00 | 24.29 | 51.43 |
| HHH Alignment | 42.09 | TBA | 41.81 | 42.51 | 42.92 |
| Intent Recognition | 24.42 | TBA | 17.49 | 22.40 | 68.35 |
| Misconceptions | 46.27 | TBA | 53.73 | 53.73 | 52.24 |
| Paraphrase | 59.50 | TBA | 51.00 | 50.00 | 51.00 |
| Sentence Ambiguity | 53.33 | TBA | 51.67 | 48.33 | 50.00 |
| Similarities Abstraction | 65.79 | TBA | 60.53 | 65.79 | 60.53 |
| **Non-Arithmetic Average** | **47.55** | TBA | **46.17** | **44.42** | **52.08** |
| **Overall Average** | **36.49** | TBA | **38.93** | **36.50** | **40.00** |
| Benchmark | Ahma 3B (instruct prompt format) 3-shot | Ahma 7B (instruct prompt format) 3-shot | FinGPT 8B 3-shot | Viking 7B 3-shot | Poro 34B (8bit quant) 3-shot |
|:---------------------------|:----------------------------------------|:----------------------------------------|:-----------------|:-----------------|:-----------------------------|
| Analogies | 52.31 | TBA | 40.77 | 54.62 | 76.92 |
| Arithmetic | 44.59 | TBA | 43.63 | 45.78 | 53.68 |
| Cause and Effect | 61.44 | TBA | 64.05 | 58.17 | 67.32 |
| Emotions | 14.37 | TBA | 44.37 | 48.13 | 56.87 |
| Empirical Judgements | 38.38 | TBA | 32.32 | 43.43 | 63.64 |
| General Knowledge | 38.57 | TBA | 54.29 | 28.57 | 74.29 |
| HHH Alignment | 42.94 | TBA | 45.39 | 44.80 | 46.07 |
| Intent Recognition | 24.28 | TBA | 51.45 | 58.82 | 83.67 |
| Misconceptions | 46.27 | TBA | 52.99 | 46.27 | 52.99 |
| Paraphrase | 58.50 | TBA | 53.00 | 54.50 | 55.00 |
| Sentence Ambiguity | 53.33 | TBA | 51.67 | 53.33 | 66.67 |
| Similarities Abstraction | 72.37 | TBA | 64.47 | 73.68 | 75.00 |
| **Non-Arithmetic Average** | **47.15** | TBA | **51.19** | **50.94** | **61.96** |
| **Overall Average** | **45.73** | TBA | **46.99** | **48.07** | **57.36** |
As we can see, the Ahma 3B model outperforms 2X larger models like FinGPT 8B and Viking 7B, especially in non-arithmetic tasks in 0-shot usage. Even the 10X larger Poro 34B model, which is generally better, doesn't show a huge performance difference considering its size, and Ahma 3B actually surpasses it in some tasks. This result might be attributed to Ahma's 2-stage pretraining and the inclusion of instruction-following examples during the pretraining phase.
In a 3-shot setting, the results are more mixed. The poorer performance of Ahma 3B in 3-shot settings might be due to the use of the instruct prompt format and having only single-turn instruction-following training examples.
### MTBench Finnish
This Ahma model was also evaluated using [MTBench Finnish by LumiOpen](https://github.com/LumiOpen/FastChat/tree/main/fastchat/llm_judge) even though this Ahma model is not fine-tuned for chat. Since the MTBench evaluates also multi-turn chats while Ahma models were only pretrained with single-turn instruction following examples, we have reported MTBench Finnish results separately for their single-turn and multi-turn evaluation examples. [Poro 34B Chat](https://huggingface.co/LumiOpen/Poro-34B-chat) model's results are copied from their model card for comparison.
| Benchmark | Ahma 3B (instruct prompt format) single-turn | Ahma 3B (instruct prompt format) multi-turn | Ahma 7B (instruct prompt format) single-turn | Ahma 7B (instruct prompt format) multi-turn | Poro 34B Chat multi-turn |
|:--------------------|:---------------------------------------------|:--------------------------------------------|:---------------------------------------------|:--------------------------------------------|:-------------------------|
| Coding | 1.00 | 1.00 | TBA | TBA | 3.05 |
| Extraction | 2.00 | 1.55 | TBA | TBA | 6.05 |
| Humanities | 4.05 | 3.25 | TBA | TBA | 9.6 |
| Math | 3.00 | 2.20 | TBA | TBA | 1.25 |
| Reasoning | 2.90 | 2.45 | TBA | TBA | 3.65 |
| Roleplay | 4.80 | 4.90 | TBA | TBA | 7.0 |
| STEM | 5.10 | 4.20 | TBA | TBA | 7.65 |
| Writing | 6.60 | 3.80 | TBA | TBA | 7.6 |
| **Overall Average** | **3.68** | **2.92** | TBA | TBA | **5.73** |
As we can see, the Ahma 3B model struggles with multi-turn examples, as expected, since it has only been pretrained on single-turn instruction-following examples. In addition, coding performance was expectedly poor because the Ahma 3B model is not trained on code data. Ahma 3B also tended to constantly repeat its generated text in some evaluation examples, which affected the scoring. Adding a repetition penalty to the evaluation script's generation settings already improved the scores significantly, so in real-world use the Ahma 3B model should be run with better generation settings than those used in this benchmark.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
 |
blockblockblock/NuExtract-bpw5-exl2 | blockblockblock | 2024-07-01T22:40:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-07-01T22:37:43Z | ---
license: mit
language:
- en
---
# Structure Extraction Model by NuMind 🔥
NuExtract is a version of [phi-3-mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), fine-tuned on a private high-quality synthetic dataset for information extraction.
To use the model, provide an input text (less than 2000 tokens) and a JSON template describing the information you need to extract.
Note: This model is purely extractive, so all text output by the model is present as is in the original text. You can also provide an example of output formatting to help the model understand your task more precisely.
Try it here: https://huggingface.co/spaces/numind/NuExtract
We also provide tiny (0.5B) and large (7B) versions of this model: [NuExtract-tiny](https://huggingface.co/numind/NuExtract-tiny) and [NuExtract-large](https://huggingface.co/numind/NuExtract-large)
**Checkout other models by NuMind:**
* SOTA Zero-shot NER Model [NuNER Zero](https://huggingface.co/numind/NuNER_Zero)
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)
## Benchmark
Zero-shot benchmark (to be released soon):
<p align="left">
<img src="result.png" width="600">
</p>
Fine-tuning benchmark (see blog post):
<p align="left">
<img src="result_ft.png" width="600">
</p>
## Usage
To use the model:
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def predict_NuExtract(model, tokenizer, text, schema, example=["", "", ""]):
schema = json.dumps(json.loads(schema), indent=4)
input_llm = "<|input|>\n### Template:\n" + schema + "\n"
for i in example:
if i != "":
input_llm += "### Example:\n"+ json.dumps(json.loads(i), indent=4)+"\n"
input_llm += "### Text:\n"+text +"\n<|output|>\n"
input_ids = tokenizer(input_llm, return_tensors="pt",truncation = True, max_length=4000).to("cuda")
output = tokenizer.decode(model.generate(**input_ids)[0], skip_special_tokens=True)
return output.split("<|output|>")[1].split("<|end-output|>")[0]
# We recommend using bf16 as it results in negligible performance loss
model = AutoModelForCausalLM.from_pretrained("numind/NuExtract", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract", trust_remote_code=True)
model.to("cuda")
model.eval()
text = """We introduce Mistral 7B, a 7–billion-parameter language model engineered for
superior performance and efficiency. Mistral 7B outperforms the best open 13B
model (Llama 2) across all evaluated benchmarks, and the best released 34B
model (Llama 1) in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding
window attention (SWA) to effectively handle sequences of arbitrary length with a
reduced inference cost. We also provide a model fine-tuned to follow instructions,
Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and
automated benchmarks. Our models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/announcing-mistral-7b/"""
schema = """{
"Model": {
"Name": "",
"Number of parameters": "",
"Number of max token": "",
"Architecture": []
},
"Usage": {
"Use case": [],
"Licence": ""
}
}"""
prediction = predict_NuExtract(model, tokenizer, text, schema, example=["","",""])
print(prediction)
``` |
Rafaelhwejj/Ia_Test | Rafaelhwejj | 2024-07-01T22:37:44Z | 0 | 0 | null | [
"license:bsl-1.0",
"region:us"
] | null | 2024-07-01T22:37:44Z | ---
license: bsl-1.0
---
|
CennetOguz/cooking_blip2_5 | CennetOguz | 2024-07-01T22:41:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T22:41:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tornet-ml/tornado_detector_baseline_v1 | tornet-ml | 2024-07-02T15:04:13Z | 0 | 0 | null | [
"weather",
"tornadoes",
"arxiv:2401.16437",
"license:mit",
"region:us"
] | null | 2024-07-01T22:42:24Z | ---
license: mit
tags:
- weather
- tornadoes
---
Model described in "[A Benchmark Dataset for Tornado Detection and Prediction using Full-Resolution Polarimetric Weather Radar Data](https://arxiv.org/abs/2401.16437)"
For instructions on how to use it, visit [https://github.com/mit-ll/tornet](https://github.com/mit-ll/tornet)
```
DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.
This material is based upon work supported by the Department of the Air Force under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Air Force.
© 2024 Massachusetts Institute of Technology.
The software/firmware is provided to you on an As-Is basis
Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above. Use of this work other than as specifically authorized by the U.S. Government may violate any copyrights that exist in this work.
``` |
blockblockblock/NuExtract-bpw4.6-exl2 | blockblockblock | 2024-07-01T22:49:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-07-01T22:47:11Z | ---
license: mit
language:
- en
---
# Structure Extraction Model by NuMind 🔥
NuExtract is a version of [phi-3-mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), fine-tuned on a private high-quality synthetic dataset for information extraction.
To use the model, provide an input text (less than 2000 tokens) and a JSON template describing the information you need to extract.
Note: This model is purely extractive, so all text output by the model is present verbatim in the original text. You can also provide an example of output formatting to help the model understand your task more precisely.
Try it here: https://huggingface.co/spaces/numind/NuExtract
We also provide tiny (0.5B) and large (7B) versions of this model: [NuExtract-tiny](https://huggingface.co/numind/NuExtract-tiny) and [NuExtract-large](https://huggingface.co/numind/NuExtract-large)
**Check out other models by NuMind:**
* SOTA Zero-shot NER Model [NuNER Zero](https://huggingface.co/numind/NuNER_Zero)
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)
## Benchmark
Zero-shot benchmark (to be released soon):
<p align="left">
<img src="result.png" width="600">
</p>
Fine-tuning benchmark (see blog post):
<p align="left">
<img src="result_ft.png" width="600">
</p>
## Usage
To use the model:
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def predict_NuExtract(model, tokenizer, text, schema, example=["", "", ""]):
schema = json.dumps(json.loads(schema), indent=4)
input_llm = "<|input|>\n### Template:\n" + schema + "\n"
for i in example:
if i != "":
input_llm += "### Example:\n"+ json.dumps(json.loads(i), indent=4)+"\n"
input_llm += "### Text:\n"+text +"\n<|output|>\n"
    input_ids = tokenizer(input_llm, return_tensors="pt", truncation=True, max_length=4000).to("cuda")
output = tokenizer.decode(model.generate(**input_ids)[0], skip_special_tokens=True)
return output.split("<|output|>")[1].split("<|end-output|>")[0]
# We recommend using bf16 as it results in negligible performance loss
model = AutoModelForCausalLM.from_pretrained("numind/NuExtract", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract", trust_remote_code=True)
model.to("cuda")
model.eval()
text = """We introduce Mistral 7B, a 7–billion-parameter language model engineered for
superior performance and efficiency. Mistral 7B outperforms the best open 13B
model (Llama 2) across all evaluated benchmarks, and the best released 34B
model (Llama 1) in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding
window attention (SWA) to effectively handle sequences of arbitrary length with a
reduced inference cost. We also provide a model fine-tuned to follow instructions,
Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and
automated benchmarks. Our models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/announcing-mistral-7b/"""
schema = """{
"Model": {
"Name": "",
"Number of parameters": "",
"Number of max token": "",
"Architecture": []
},
"Usage": {
"Use case": [],
"Licence": ""
}
}"""
prediction = predict_NuExtract(model, tokenizer, text, schema, example=["","",""])
print(prediction)
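
# Optionally, pass an output-formatting example to steer the model, as the
# card suggests. A minimal sketch: the filled-in values below are illustrative
# assumptions, not taken from the original card.
example_output = json.dumps({
    "Model": {
        "Name": "Mistral 7B",
        "Number of parameters": "7B",
        "Number of max token": "",
        "Architecture": ["Transformer"]
    },
    "Usage": {
        "Use case": ["text generation"],
        "Licence": "Apache 2.0"
    }
})
prediction_with_example = predict_NuExtract(model, tokenizer, text, schema,
                                            example=[example_output, "", ""])
print(prediction_with_example)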
``` |
sruly/phi-s3 | sruly | 2024-07-01T22:47:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T22:47:12Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** sruly
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
habulaj/8577362803 | habulaj | 2024-07-01T22:49:33Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:49:26Z | Entry not found |
ezrraaaa/chloe | ezrraaaa | 2024-07-01T22:49:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:49:31Z | Entry not found |
Ramikan-BR/Codama-8b-v0 | Ramikan-BR | 2024-07-02T12:11:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T22:50:52Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
middles/retest_kanji_more_epochs | middles | 2024-07-01T22:51:04Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:51:04Z | Entry not found |
habulaj/9555379811 | habulaj | 2024-07-01T22:52:57Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:52:54Z | Entry not found |
valerielucro/mistral_gsm8k_sft_v2_epoch3 | valerielucro | 2024-07-01T22:54:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T22:53:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
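In the absence of author-provided code, here is a minimal sketch assuming the repository hosts a standard causal language-model checkpoint; the prompt format and generation settings are illustrative assumptions, not documented behavior:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "valerielucro/mistral_gsm8k_sft_v2_epoch3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# A GSM8K-style word problem; the "Question:/Answer:" format is an assumption.
prompt = "Question: Natalia sold 48 clips in April and half as many in May. How many clips did she sell in total?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```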
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blockblockblock/NuExtract-bpw4.2-exl2 | blockblockblock | 2024-07-01T22:58:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-07-01T22:56:30Z | ---
license: mit
language:
- en
---
# Structure Extraction Model by NuMind 🔥
NuExtract is a version of [phi-3-mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), fine-tuned on a private high-quality synthetic dataset for information extraction.
To use the model, provide an input text (less than 2000 tokens) and a JSON template describing the information you need to extract.
Note: This model is purely extractive, so all text output by the model is present verbatim in the original text. You can also provide an example of output formatting to help the model understand your task more precisely.
Try it here: https://huggingface.co/spaces/numind/NuExtract
We also provide tiny (0.5B) and large (7B) versions of this model: [NuExtract-tiny](https://huggingface.co/numind/NuExtract-tiny) and [NuExtract-large](https://huggingface.co/numind/NuExtract-large)
**Check out other models by NuMind:**
* SOTA Zero-shot NER Model [NuNER Zero](https://huggingface.co/numind/NuNER_Zero)
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)
## Benchmark
Zero-shot benchmark (to be released soon):
<p align="left">
<img src="result.png" width="600">
</p>
Fine-tuning benchmark (see blog post):
<p align="left">
<img src="result_ft.png" width="600">
</p>
## Usage
To use the model:
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def predict_NuExtract(model, tokenizer, text, schema, example=["", "", ""]):
schema = json.dumps(json.loads(schema), indent=4)
input_llm = "<|input|>\n### Template:\n" + schema + "\n"
for i in example:
if i != "":
input_llm += "### Example:\n"+ json.dumps(json.loads(i), indent=4)+"\n"
input_llm += "### Text:\n"+text +"\n<|output|>\n"
    input_ids = tokenizer(input_llm, return_tensors="pt", truncation=True, max_length=4000).to("cuda")
output = tokenizer.decode(model.generate(**input_ids)[0], skip_special_tokens=True)
return output.split("<|output|>")[1].split("<|end-output|>")[0]
# We recommend using bf16 as it results in negligible performance loss
model = AutoModelForCausalLM.from_pretrained("numind/NuExtract", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract", trust_remote_code=True)
model.to("cuda")
model.eval()
text = """We introduce Mistral 7B, a 7–billion-parameter language model engineered for
superior performance and efficiency. Mistral 7B outperforms the best open 13B
model (Llama 2) across all evaluated benchmarks, and the best released 34B
model (Llama 1) in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding
window attention (SWA) to effectively handle sequences of arbitrary length with a
reduced inference cost. We also provide a model fine-tuned to follow instructions,
Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and
automated benchmarks. Our models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/announcing-mistral-7b/"""
schema = """{
"Model": {
"Name": "",
"Number of parameters": "",
"Number of max token": "",
"Architecture": []
},
"Usage": {
"Use case": [],
"Licence": ""
}
}"""
prediction = predict_NuExtract(model, tokenizer, text, schema, example=["","",""])
print(prediction)
``` |
habulaj/222736195847 | habulaj | 2024-07-01T22:57:26Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-01T22:57:18Z | Entry not found |
lucasdozie/openHermes_ggml-model-Q4_K_M | lucasdozie | 2024-07-01T22:57:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T22:57:49Z | ---
license: apache-2.0
---
|